Debezium / DBZ-5205

debezium_signal schema issue?


    • Type: Bug
    • Resolution: Not a Bug
    • Priority: Major
    • Affects Version/s: 1.9.3.Final

      My schema registry had this key schema beforehand (I think because I inserted an execute-snapshot record into the signal table months ago, while I was on Debezium 1.7, though I deleted the DEBEZIUM_SIGNAL topic afterwards):

      {
        "type": "record",
        "name": "Key",
        "namespace": "my_topicoracle.C__DBZUSER.DEBEZIUM_SIGNAL",
        "fields": [
          {
            "name": "id",
            "type": [
              "null",
              "string"
            ],
            "default": null
          },
          {
            "name": "type",
            "type": "string"
          },
          {
            "name": "data",
            "type": [
              "null",
              "string"
            ],
            "default": null
          }
        ],
        "connect.name": "my_topicoracle.C__DBZUSER.DEBEZIUM_SIGNAL.Key"
      } 

      Then I tried to insert this row into the Oracle DB:

      insert into c##dbzuser.debezium_signal values ('924e3ff8-2245-43ca-ba77-2af9af02fa07','log','{"message": "Signal message at offset {}"}'); commit;
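For context, this row is a Debezium `log` signal: the `data` column carries a JSON payload whose `message` may contain a `{}` placeholder that the connector fills with the current offset when it logs the signal. A minimal sketch of that substitution (function name and offset shape are illustrative, not Debezium's internal API):

```python
import json

def render_log_signal(data_json: str, offset: dict) -> str:
    """Substitute the {} placeholder in a `log` signal's message with the
    connector offset, approximating what Debezium prints when it handles
    the signal. This is an illustration, not the connector's actual code."""
    message = json.loads(data_json).get("message", "")
    return message.replace("{}", json.dumps(offset))

# Using the payload from the insert statement above and a made-up offset:
rendered = render_log_signal(
    '{"message": "Signal message at offset {}"}',
    {"commit_scn": 524571042553},
)
```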

       

      The 1.9.3 Debezium connector errored out with this:

      [2022-06-05 00:08:33,802] INFO WorkerSourceTask{id=kafka-connect-src-01-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask)
      [2022-06-05 00:08:42,826] INFO 1 records sent during previous 10:09:58.443, last recorded offset: {commit_scn=524571042553, transaction_id=null, snapshot_pending_tx=440015005a3d2800:524567461263,5d002000b50b2800:524567462189, snapshot_scn=524567462190, scn=524571041692} (io.debezium.connector.common.BaseSourceTask)
      [2022-06-05 00:08:42,885] INFO WorkerSourceTask{id=kafka-connect-src-01-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask)
      [2022-06-05 00:08:42,886] ERROR WorkerSourceTask{id=kafka-connect-src-01-0} Task threw an uncaught and unrecoverable exception. Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask)
      org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error handler
              at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:206)
              at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:132)
              at org.apache.kafka.connect.runtime.WorkerSourceTask.convertTransformedRecord(WorkerSourceTask.java:318)
              at org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:347)
              at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:261)
              at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:188)
              at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:237)
              at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
              at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
              at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
              at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
              at java.base/java.lang.Thread.run(Thread.java:829)
      Caused by: org.apache.kafka.common.config.ConfigException: Failed to access Avro data from topic my-topicoracle.C__DBZUSER.DEBEZIUM_SIGNAL : Schema being registered is incompatible with an earlier schema for subject "my-topicoracle.C__DBZUSER.DEBEZIUM_SIGNAL-key"; error code: 409
              at io.confluent.connect.avro.AvroConverter.fromConnectData(AvroConverter.java:98)
              at org.apache.kafka.connect.storage.Converter.fromConnectData(Converter.java:63)
              at org.apache.kafka.connect.runtime.WorkerSourceTask.lambda$convertTransformedRecord$2(WorkerSourceTask.java:318)
              at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:156)
              at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:190)
              ... 11 more
      [2022-06-05 00:08:42,888] INFO Stopping down connector (io.debezium.connector.common.BaseSourceTask)
      [2022-06-05 00:08:43,802] INFO WorkerSourceTask{id=kafka-connect-src-01-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask)
      [2022-06-05 00:08:43,802] WARN Couldn't commit processed log positions with the source database due to a concurrent connector shutdown or restart (io.debezium.connector.common.BaseSourceTask)
      [2022-06-05 00:08:45,472] INFO startScn=524571042553, endScn=524571042574 (io.debezium.connector.oracle.logminer.LogMinerStreamingChangeEventSource) 
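The 409 in the stack trace is the Schema Registry rejecting the connector's new key schema against the version already registered for the subject. This can be reproduced up front by POSTing a candidate schema to the registry's compatibility endpoint before the connector tries to register it. A hedged sketch using only stdlib urllib (the registry address is an assumption; `POST /compatibility/subjects/{subject}/versions/latest` is the Confluent Schema Registry endpoint):

```python
import json
import urllib.request

REGISTRY = "http://localhost:8081"  # hypothetical registry address
SUBJECT = "my-topicoracle.C__DBZUSER.DEBEZIUM_SIGNAL-key"

def compatibility_request(schema: dict) -> urllib.request.Request:
    """Build the POST that asks the registry whether `schema` is compatible
    with the latest registered version of SUBJECT. The request is only
    constructed here, not sent, since the registry URL is hypothetical."""
    url = f"{REGISTRY}/compatibility/subjects/{SUBJECT}/versions/latest"
    body = json.dumps({"schema": json.dumps(schema)}).encode()
    return urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/vnd.schemaregistry.v1+json"},
        method="POST",
    )

req = compatibility_request({"type": "record", "name": "Key", "fields": []})
# urllib.request.urlopen(req) would return a JSON body with "is_compatible"
# set to false for the non-nullable key schema described in this report.
```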

       

      Then I deleted the key and value schemas from the schema registry, ran: update c##dbzuser.debezium_signal set id = '924e3ff8-2245-43ca-ba77-2af9af02fa05' and restarted the Debezium connector. When I checked the schema registry afterwards, the key schema was different:

      {
        "type": "record",
        "name": "Key",
        "namespace": "my_topicoracle.C__DBZUSER.DEBEZIUM_SIGNAL",
        "fields": [
          {
            "name": "id",
            "type": "string"
          },
          {
            "name": "type",
            "type": "string"
          },
          {
            "name": "data",
            "type": [
              "null",
              "string"
            ],
            "default": null
          }
        ],
        "connect.name": "my_topicoracle.C__DBZUSER.DEBEZIUM_SIGNAL.Key"
      } 
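The two key schemas differ only in the `id` field: the earlier one declared the union `["null", "string"]` with a null default, while the newly registered one requires a plain `"string"`. Under the registry's default BACKWARD compatibility mode, dropping the `"null"` branch is rejected, because old records whose `id` was null could no longer be read with the new schema; that is what produced the 409 before the subject was deleted. A rough field-level sketch of that reasoning using only the stdlib (simplified; this is not Confluent's actual compatibility algorithm):

```python
import json

# The `id` types from the two schemas above.
OLD_ID_TYPE = json.loads('["null", "string"]')  # union from the first schema
NEW_ID_TYPE = "string"                          # plain type from the second schema

def is_backward_compatible(old_type, new_type) -> bool:
    """Very rough check: a reader using the new type must be able to read
    data written with the old one, so every old union branch must still be
    present in the new type. Dropping "null" fails this test."""
    old_branches = set(old_type) if isinstance(old_type, list) else {old_type}
    new_branches = set(new_type) if isinstance(new_type, list) else {new_type}
    return old_branches <= new_branches

compatible = is_backward_compatible(OLD_ID_TYPE, NEW_ID_TYPE)
```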

       

            Assignee: Unassigned
            Reporter: tooptoop toop toop (Inactive)