Debezium / DBZ-2677

SQLException: Stream has already been closed during Schema Read


    • Type: Bug
    • Resolution: Duplicate
    • Priority: Critical
    • Affects Version: 1.3.0.Final
    • Component: oracle-connector

      Here is the connector:
      ###
      POST http://connect.data.k8s.kod.kyriba.com/connectors
      Content-Type: application/json

      {
        "name": "usdemo-connector",
        "config": {
          "connector.class": "io.debezium.connector.oracle.OracleConnector",
          "database.user": "admin",
          "database.dbname": "ORCL",
          "database.schema": "DEMO",
          "tasks.max": "1",
          "database.hostname": "xxx.rds.amazonaws.com",
          "database.password": "xxx",
          "database.history.kafka.bootstrap.servers": "ks-events-cp-kafka:9092",
          "database.history.kafka.topic": "schema-changes.inventory",
          "database.server.name": "usdemo",
          "database.out.server.name": "dbzxout",
          "database.port": "1521",
          "database.tablename.case.insensitive": "true",
          "database.oracle.version": "11",
          "table.include.list": "DEMO.TEST",
          "database.connection.adapter": "logminer"
        }
      }
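
      For reference, the same registration call can also be issued programmatically. Below is a minimal sketch, assuming Java 11+ (java.net.http) and that the Connect REST endpoint above is reachable; the class name and the local file holding the JSON above are illustrative, not part of the original report.

      import java.net.URI;
      import java.net.http.HttpClient;
      import java.net.http.HttpRequest;
      import java.net.http.HttpResponse;
      import java.nio.file.Files;
      import java.nio.file.Path;

      public class RegisterConnector {
          public static void main(String[] args) throws Exception {
              // The connector JSON shown above, stored in a local file (file name is illustrative).
              String connectorJson = Files.readString(Path.of("usdemo-connector.json"));

              HttpRequest request = HttpRequest.newBuilder(
                              URI.create("http://connect.data.k8s.kod.kyriba.com/connectors"))
                      .header("Content-Type", "application/json")
                      .POST(HttpRequest.BodyPublishers.ofString(connectorJson))
                      .build();

              // Kafka Connect answers 201 Created when the connector is registered.
              HttpResponse<String> response = HttpClient.newHttpClient()
                      .send(request, HttpResponse.BodyHandlers.ofString());
              System.out.println(response.statusCode() + " " + response.body());
          }
      }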
       
      Here is the DDL for this table:

      CREATE TABLE "DEMO"."TEST"
         (
         "IS_RECONCILE" NUMBER(1,0) DEFAULT 1 NOT NULL ENABLE,
         "COMPANY_ACCOUNT_ID" NUMBER(38,0) NOT NULL ENABLE,
         "ACCOUNT_DESCR_PLAIN" VARCHAR2(150 CHAR),
         "INITIAL_ACCOUNTING_BAL_DATE" DATE,
         "MEMO" VARCHAR2(1000 CHAR),
         "ACCOUNT_DESCR" RAW(608),
         "UUID" RAW(16) DEFAULT SYS_GUID() NOT NULL ENABLE
         )

      I'm getting this exception while using 1.3.0. It happens with either the XStream or the LogMiner adapter.

       

      Here is the exception:

      org.apache.kafka.connect.errors.ConnectException: An exception occurred in the change event producer. This connector will be stopped.
          at io.debezium.pipeline.ErrorHandler.setProducerThrowable(ErrorHandler.java:42)
          at io.debezium.pipeline.ChangeEventSourceCoordinator.lambda$start$0(ChangeEventSourceCoordinator.java:121)
          at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
          at java.util.concurrent.FutureTask.run(FutureTask.java:266)
          at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
          at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
          at java.lang.Thread.run(Thread.java:748)
      Caused by: java.lang.RuntimeException: java.sql.SQLException: Stream has already been closed
          at io.debezium.pipeline.source.AbstractSnapshotChangeEventSource.execute(AbstractSnapshotChangeEventSource.java:76)
          at io.debezium.pipeline.ChangeEventSourceCoordinator.lambda$start$0(ChangeEventSourceCoordinator.java:105)
          ... 5 more
      Caused by: java.sql.SQLException: Stream has already been closed
          at oracle.jdbc.driver.LongAccessor.getBytesInternal(LongAccessor.java:129)
          at oracle.jdbc.driver.T2CLongAccessor.getBytesInternal(T2CLongAccessor.java:72)
          at oracle.jdbc.driver.Accessor.getBytes(Accessor.java:940)
          at oracle.jdbc.driver.LongAccessor.getString(LongAccessor.java:156)
          at oracle.jdbc.driver.GeneratedStatement.getString(GeneratedStatement.java:289)
          at oracle.jdbc.driver.GeneratedScrollableResultSet.getString(GeneratedScrollableResultSet.java:376)
          at io.debezium.jdbc.JdbcConnection.readTableColumn(JdbcConnection.java:1179)
          at io.debezium.connector.oracle.OracleConnection.readTableColumn(OracleConnection.java:154)
          at io.debezium.jdbc.JdbcConnection.readSchema(JdbcConnection.java:1126)
          at io.debezium.connector.oracle.OracleConnection.readSchema(OracleConnection.java:216)
          at io.debezium.connector.oracle.OracleSnapshotChangeEventSource.readTableStructure(OracleSnapshotChangeEventSource.java:218)
          at io.debezium.relational.RelationalSnapshotChangeEventSource.doExecute(RelationalSnapshotChangeEventSource.java:122)
          at io.debezium.pipeline.source.AbstractSnapshotChangeEventSource.execute(AbstractSnapshotChangeEventSource.java:63)
          ... 6 more
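
      The deepest frames (oracle.jdbc.driver.LongAccessor.getString invoked from io.debezium.jdbc.JdbcConnection.readTableColumn) show the failure occurs while reading a string from the JDBC column-metadata result set. Oracle's driver reports the COLUMN_DEF column of DatabaseMetaData.getColumns as a streamed LONG, and such a stream is discarded as soon as a later column of the same row is accessed, which then surfaces as "Stream has already been closed". The sketch below is a plain-JDBC illustration of that access pattern, not the Debezium code path itself; the connection details are the placeholders from the config above, and the Oracle JDBC driver must be on the classpath.

      import java.sql.Connection;
      import java.sql.DatabaseMetaData;
      import java.sql.DriverManager;
      import java.sql.ResultSet;
      import java.sql.SQLException;

      public class LongColumnStreamRepro {
          public static void main(String[] args) throws SQLException {
              // Placeholder host/credentials taken from the (redacted) connector config above.
              try (Connection conn = DriverManager.getConnection(
                      "jdbc:oracle:thin:@//xxx.rds.amazonaws.com:1521/ORCL", "admin", "xxx")) {
                  DatabaseMetaData metadata = conn.getMetaData();
                  try (ResultSet rs = metadata.getColumns(null, "DEMO", "TEST", null)) {
                      while (rs.next()) {
                          String name = rs.getString("COLUMN_NAME");
                          // IS_NULLABLE is positioned after COLUMN_DEF in the getColumns()
                          // result set; reading it first makes the Oracle driver skip over
                          // the streamed LONG value backing COLUMN_DEF ...
                          String nullable = rs.getString("IS_NULLABLE");
                          // ... so this read can then fail with
                          // java.sql.SQLException: Stream has already been closed
                          String defaultValue = rs.getString("COLUMN_DEF");
                          System.out.printf("%s nullable=%s default=%s%n", name, nullable, defaultValue);
                      }
                  }
              }
          }
      }

      In plain JDBC the error can be avoided by reading COLUMN_DEF before any later-positioned column, or reportedly by setting the driver property oracle.jdbc.useFetchSizeWithLongColumn=true; whether either of these matches the fix in the issue this one duplicates is not confirmed here.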
       

              Assignee: Chris Cranford (ccranfor@redhat.com)
              Reporter: Ludovic Le Blay
              Votes: 0
              Watchers: 4