Why does Kafka JDBC Connect insert data as BLOB instead of varchar?
This article explains why Kafka JDBC Connect inserts data as BLOB instead of varchar and how to deal with it. It may be a useful reference for anyone facing the same problem.

Problem description

I am using a Java producer to insert data into my Kafka topic, and Kafka JDBC Connect to insert the data into my Oracle table. Below is my producer code.

package producer.serialized.avro;

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

public class Sender4 {

    public static void main(String[] args) {

        // Avro schema for the flight record: three string fields.
        String flightSchema = "{\"type\":\"record\",\"name\":\"Flight\","
                + "\"fields\":[{\"name\":\"flight_id\",\"type\":\"string\"},"
                + "{\"name\":\"flight_to\",\"type\":\"string\"},"
                + "{\"name\":\"flight_from\",\"type\":\"string\"}]}";

        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.0.1:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                io.confluent.kafka.serializers.KafkaAvroSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                io.confluent.kafka.serializers.KafkaAvroSerializer.class);
        props.put("schema.registry.url", "http://192.168.0.1:8081");

        Schema.Parser parser = new Schema.Parser();
        Schema schema = parser.parse(flightSchema);

        GenericRecord avroRecord = new GenericData.Record(schema);
        avroRecord.put("flight_id", "myflight");
        avroRecord.put("flight_to", "QWE");
        avroRecord.put("flight_from", "RTY");

        // try-with-resources closes the producer, which also flushes the
        // asynchronous send before the JVM exits.
        try (KafkaProducer<String, GenericRecord> producer = new KafkaProducer<>(props)) {
            ProducerRecord<String, GenericRecord> record =
                    new ProducerRecord<>("topic9", avroRecord);
            producer.send(record);
        }
    }
}

Below are my Kafka Connect sink properties:

                  name=test-sink-6
                  connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
                  tasks.max=1
                  topics=topic9
                  connection.url=jdbc:oracle:thin:@192.168.0.1:1521:usera
                  connection.user=usera
                  connection.password=usera
                  auto.create=true
                  table.name.format=FLIGHTS4
                  key.converter=io.confluent.connect.avro.AvroConverter
                  key.converter.schema.registry.url=http://192.168.0.1:8081
                  value.converter=io.confluent.connect.avro.AvroConverter
                  value.converter.schema.registry.url=http://192.168.0.1:8081
                  

From my schema, I expect the values inserted into my Oracle table to be varchar2. I created a table with three varchar2 columns, but when I started my connector, nothing was inserted. I then dropped the table and ran the connector with table auto-create mode on. This time the table was created and the values were inserted, but the column data type is CLOB. I want it to be varchar2, since that uses less space.
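(For reference, the column types that auto.create actually produced can be confirmed over JDBC. This is a minimal sketch reusing the connection settings from the connector config; the USERA schema name and the ColumnTypeCheck class are assumptions for illustration, and the Oracle JDBC driver must be on the classpath.)

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;

public class ColumnTypeCheck {
    public static void main(String[] args) throws Exception {
        // Connection settings reused from the connector config above.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@192.168.0.1:1521:usera", "usera", "usera");
             ResultSet cols = conn.getMetaData()
                     .getColumns(null, "USERA", "FLIGHTS4", null)) {
            while (cols.next()) {
                // TYPE_NAME is the Oracle column type, e.g. CLOB or VARCHAR2.
                System.out.println(cols.getString("COLUMN_NAME")
                        + " -> " + cols.getString("TYPE_NAME"));
            }
        }
    }
}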

Why is this happening, and how can I fix it? Thank you.

Recommended answer

It looks like Kafka's String type is mapped to Oracle's NCLOB:

| Schema Type | MySQL           | Oracle        | PostgreSQL       | SQLite  |
|-------------|-----------------|---------------|------------------|---------|
| INT8        | TINYINT         | NUMBER(3,0)   | SMALLINT         | NUMERIC |
| INT16       | SMALLINT        | NUMBER(5,0)   | SMALLINT         | NUMERIC |
| INT32       | INT             | NUMBER(10,0)  | INT              | NUMERIC |
| INT64       | BIGINT          | NUMBER(19,0)  | BIGINT           | NUMERIC |
| FLOAT32     | FLOAT           | BINARY_FLOAT  | REAL             | REAL    |
| FLOAT64     | DOUBLE          | BINARY_DOUBLE | DOUBLE PRECISION | REAL    |
| BOOLEAN     | TINYINT         | NUMBER(1,0)   | BOOLEAN          | NUMERIC |
| STRING      | VARCHAR(256)    | NCLOB         | TEXT             | TEXT    |
| BYTES       | VARBINARY(1024) | BLOB          | BYTEA            | BLOB    |
| 'Decimal'   | DECIMAL(65,s)   | NUMBER(*,s)   | DECIMAL          | NUMERIC |
| 'Date'      | DATE            | DATE          | DATE             | NUMERIC |
| 'Time'      | TIME(3)         | DATE          | TIME             | NUMERIC |
| 'Timestamp' | TIMESTAMP(3)    | TIMESTAMP     | TIMESTAMP        | NUMERIC |

Sources:
https://www.ibm.com/support/knowledgecenter/en/SSPT3X_4.2.5/com.ibm.swg.im.infosphere.biginsights.admin.doc/doc/admin_kafka_jdbc_sink.html
https://docs.confluent.io/current/connect/connect-jdbc/docs/sink_connector.html
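The producer's three Avro string fields reach the sink as Connect STRING fields, which is why they fall into the STRING row of the table above. Below is a minimal sketch of the Connect-side view of the value schema, built with the Connect data API purely for illustration; the ConnectSchemaSketch class is hypothetical and not part of the connector.

import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.SchemaBuilder;

public class ConnectSchemaSketch {
    public static void main(String[] args) {
        // Illustrative only: the Connect-side view of the producer's Avro
        // value schema after the AvroConverter has deserialized it. Every
        // Avro "string" field becomes Schema.Type.STRING.
        Schema flight = SchemaBuilder.struct().name("Flight")
                .field("flight_id", Schema.STRING_SCHEMA)
                .field("flight_to", Schema.STRING_SCHEMA)
                .field("flight_from", Schema.STRING_SCHEMA)
                .build();
        flight.fields().forEach(f ->
                System.out.println(f.name() + " : " + f.schema().type()));
    }
}

Running it prints STRING for all three fields, i.e. exactly the row that maps to NCLOB on Oracle.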

Update

The OracleDialect class (https://github.com/confluentinc/kafka-connect-jdbc/blob/master/src/main/java/io/confluent/connect/jdbc/sink/dialect/OracleDialect.java) has the CLOB value hard-coded, and simply extending it with your own class to change that mapping will not help, because the dialect type is chosen in a static method in JdbcSinkTask (https://github.com/confluentinc/kafka-connect-jdbc/blob/master/src/main/java/io/confluent/connect/jdbc/sink/JdbcSinkTask.java):

                  final DbDialect dbDialect = DbDialect.fromConnectionString(config.connectionUrl);
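To make the point concrete, the dialect's type choice has roughly this shape. This is a simplified sketch, not the actual connector source; the class and method names below are hypothetical.

import org.apache.kafka.connect.data.Schema;

public class OracleTypeMappingSketch {
    // Simplified sketch of the kind of hard-coded mapping in OracleDialect;
    // see the linked source for the real method and the full type list.
    static String sqlTypeFor(Schema.Type type) {
        switch (type) {
            case STRING: return "CLOB";          // fixed; no config can change it
            case BYTES:  return "BLOB";
            case INT32:  return "NUMBER(10,0)";
            case INT64:  return "NUMBER(19,0)";
            default:     throw new IllegalArgumentException("not sketched: " + type);
        }
    }

    public static void main(String[] args) {
        System.out.println(sqlTypeFor(Schema.Type.STRING)); // prints CLOB
    }
}

Because DbDialect.fromConnectionString(...) is static and keyed on the JDBC URL, a jdbc:oracle:thin URL always gets the stock OracleDialect, so a custom subclass with a different STRING mapping is never picked up.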
                  
