Debezium / DBZ-8668

Postgres Connector fails to retrieve schema for new tables


    • Type: Bug
    • Resolution: Unresolved
    • Priority: Major
    • Affects Version: 3.1.0.CR1
    • Component: postgresql-connector

      Bug report

      What Debezium connector do you use and what version?

      The Postgres connector, version 3.1 (3.1.0.CR1).

      What is the connector configuration?

       

      {
        "name": "connector",
        "config": {
          "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
          "database.hostname": "host.docker.internal",
          "database.port": "5453",
          "database.user": "",
          "database.password": "",
          "database.dbname": "users",
          "plugin.name": "pgoutput",
          "topic.prefix": "users",
          "tombstones.on.delete": "false",
          "signal.data.collection": "public.debezium_signals",
          "name": "connector"
        }
      } 

       

       

      What is the captured database version and mode of deployment?

      Postgres 17, running in a Docker container on a local Windows machine.

      SELECT VERSION() results in the following:

       

      PostgreSQL 17.0 (Debian 17.0-1.pgdg120+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit

       

      What behavior do you expect?

      Restarted my connector with an HTTP PUT to "localhost:8083/connectors/connector/config", using the Debezium config JSON above.

      Created a new table called `new3` under the `public` schema using pgAdmin.

      Executed the following Debezium signal, which should add the new table `public.new3` to my existing connector's CDC table list:

      INSERT INTO public.debezium_signals (id, type, data) VALUES (gen_random_uuid(), 'execute-snapshot', '{"data-collections": ["public.new3"], "type": "incremental"}');

       

      What behavior do you see?

      I get the following Debezium error after executing the above signal. I also tried creating other new tables and sending a signal for each of them, which resulted in the same errors.

      2025-02-06 21:13:23 2025-02-07 05:13:23,383 INFO   Postgres|users|streaming  Requested 'INCREMENTAL' snapshot of data collections '[public.new3]' with additional conditions '[]' and surrogate key 'PK of table will be used'   [io.debezium.pipeline.signal.actions.snapshotting.ExecuteSnapshot]
      2025-02-06 21:13:23 2025-02-07 05:13:23,404 INFO   Postgres|users|streaming  Schema not found for table 'public.new3', known tables [public.users, public.role_assignment_teams, public.role_assignments, public.role_assignment_users, public.role_assignment_orgs, public.roles, public.debezium_signals, public.team_users, public.new1, public.__EFMigrationsHistory, public.new2, public.teams, public.orgs]. Will attempt to retrieve this schema   [io.debezium.pipeline.source.snapshot.incremental.AbstractIncrementalSnapshotChangeEventSource]
      2025-02-06 21:13:23 2025-02-07 05:13:23,414 WARN   Postgres|users|streaming  Failed to retrieve schema for public.new3   [io.debezium.pipeline.source.snapshot.incremental.AbstractIncrementalSnapshotChangeEventSource]
      2025-02-06 21:13:23 java.lang.NullPointerException: database must not be null
      2025-02-06 21:13:23     at java.base/java.util.Objects.requireNonNull(Objects.java:259)
      2025-02-06 21:13:23     at io.debezium.schema.SchemaChangeEvent.<init>(SchemaChangeEvent.java:54)
      2025-02-06 21:13:23     at io.debezium.schema.SchemaChangeEvent.<init>(SchemaChangeEvent.java:45)
      2025-02-06 21:13:23     at io.debezium.schema.SchemaChangeEvent.of(SchemaChangeEvent.java:183)
      2025-02-06 21:13:23     at io.debezium.schema.SchemaChangeEvent.ofCreate(SchemaChangeEvent.java:270)
      2025-02-06 21:13:23     at io.debezium.pipeline.source.snapshot.incremental.AbstractIncrementalSnapshotChangeEventSource.createAndDispatchSchemaChangeEvent(AbstractIncrementalSnapshotChangeEventSource.java:778)
      2025-02-06 21:13:23     at io.debezium.pipeline.source.snapshot.incremental.AbstractIncrementalSnapshotChangeEventSource.retrieveAndRefreshSchema(AbstractIncrementalSnapshotChangeEventSource.java:396)
      2025-02-06 21:13:23     at io.debezium.pipeline.source.snapshot.incremental.AbstractIncrementalSnapshotChangeEventSource.isTableInvalid(AbstractIncrementalSnapshotChangeEventSource.java:365)
      2025-02-06 21:13:23     at io.debezium.pipeline.source.snapshot.incremental.AbstractIncrementalSnapshotChangeEventSource.readChunk(AbstractIncrementalSnapshotChangeEventSource.java:266)
      2025-02-06 21:13:23     at io.debezium.pipeline.source.snapshot.incremental.AbstractIncrementalSnapshotChangeEventSource.addDataCollectionNamesToSnapshot(AbstractIncrementalSnapshotChangeEventSource.java:490)
      2025-02-06 21:13:23     at io.debezium.pipeline.signal.actions.snapshotting.ExecuteSnapshot.arrived(ExecuteSnapshot.java:78)
      2025-02-06 21:13:23     at io.debezium.pipeline.signal.SignalProcessor.processSignal(SignalProcessor.java:191)
      2025-02-06 21:13:23     at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)
      2025-02-06 21:13:23     at java.base/java.util.AbstractList$RandomAccessSpliterator.forEachRemaining(AbstractList.java:722)
      2025-02-06 21:13:23     at java.base/java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:762)
      2025-02-06 21:13:23     at java.base/java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:276)
      2025-02-06 21:13:23     at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197)
      2025-02-06 21:13:23     at java.base/java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:179)
      2025-02-06 21:13:23     at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1708)
      2025-02-06 21:13:23     at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509)
      2025-02-06 21:13:23     at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499)
      2025-02-06 21:13:23     at java.base/java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151)
      2025-02-06 21:13:23     at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174)
      2025-02-06 21:13:23     at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
      2025-02-06 21:13:23     at java.base/java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:596)
      2025-02-06 21:13:23     at io.debezium.pipeline.signal.SignalProcessor.lambda$processSourceSignal$4(SignalProcessor.java:155)
      2025-02-06 21:13:23     at io.debezium.pipeline.signal.SignalProcessor.executeWithSemaphore(SignalProcessor.java:165)
      2025-02-06 21:13:23     at io.debezium.pipeline.signal.SignalProcessor.processSourceSignal(SignalProcessor.java:149)
      2025-02-06 21:13:23     at io.debezium.pipeline.EventDispatcher$2.changeRecord(EventDispatcher.java:299)
      2025-02-06 21:13:23     at io.debezium.relational.RelationalChangeRecordEmitter.emitCreateRecord(RelationalChangeRecordEmitter.java:79)
      2025-02-06 21:13:23     at io.debezium.relational.RelationalChangeRecordEmitter.emitChangeRecords(RelationalChangeRecordEmitter.java:47)
      2025-02-06 21:13:23     at io.debezium.connector.postgresql.PostgresChangeRecordEmitter.emitChangeRecords(PostgresChangeRecordEmitter.java:94)
      2025-02-06 21:13:23     at io.debezium.pipeline.EventDispatcher.dispatchDataChangeEvent(EventDispatcher.java:280)
      2025-02-06 21:13:23     at io.debezium.connector.postgresql.PostgresStreamingChangeEventSource.processReplicationMessages(PostgresStreamingChangeEventSource.java:333)
      2025-02-06 21:13:23     at io.debezium.connector.postgresql.PostgresStreamingChangeEventSource.lambda$processMessages$0(PostgresStreamingChangeEventSource.java:232)
      2025-02-06 21:13:23     at io.debezium.connector.postgresql.connection.pgoutput.PgOutputMessageDecoder.decodeInsert(PgOutputMessageDecoder.java:432)
      2025-02-06 21:13:23     at io.debezium.connector.postgresql.connection.pgoutput.PgOutputMessageDecoder.processNotEmptyMessage(PgOutputMessageDecoder.java:208)
      2025-02-06 21:13:23     at io.debezium.connector.postgresql.connection.AbstractMessageDecoder.processMessage(AbstractMessageDecoder.java:41)
      2025-02-06 21:13:23     at io.debezium.connector.postgresql.connection.PostgresReplicationConnection$1.deserializeMessages(PostgresReplicationConnection.java:723)
      2025-02-06 21:13:23     at io.debezium.connector.postgresql.connection.PostgresReplicationConnection$1.readPending(PostgresReplicationConnection.java:715)
      2025-02-06 21:13:23     at io.debezium.connector.postgresql.PostgresStreamingChangeEventSource.processMessages(PostgresStreamingChangeEventSource.java:232)
      2025-02-06 21:13:23     at io.debezium.connector.postgresql.PostgresStreamingChangeEventSource.execute(PostgresStreamingChangeEventSource.java:183)
      2025-02-06 21:13:23     at io.debezium.connector.postgresql.PostgresStreamingChangeEventSource.execute(PostgresStreamingChangeEventSource.java:42)
      2025-02-06 21:13:23     at io.debezium.pipeline.ChangeEventSourceCoordinator.streamEvents(ChangeEventSourceCoordinator.java:324)
      2025-02-06 21:13:23     at io.debezium.pipeline.ChangeEventSourceCoordinator.executeChangeEventSources(ChangeEventSourceCoordinator.java:203)
      2025-02-06 21:13:23     at io.debezium.pipeline.ChangeEventSourceCoordinator.lambda$start$0(ChangeEventSourceCoordinator.java:143)
      2025-02-06 21:13:23     at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:572)
      2025-02-06 21:13:23     at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
      2025-02-06 21:13:23     at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
      2025-02-06 21:13:23     at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
      2025-02-06 21:13:23     at java.base/java.lang.Thread.run(Thread.java:1583)
      2025-02-06 21:13:23 2025-02-07 05:13:23,414 WARN   Postgres|users|streaming  Schema retrieval failed due to an exception for public.new3   [io.debezium.pipeline.source.snapshot.incremental.AbstractIncrementalSnapshotChangeEventSource]
      2025-02-06 21:13:23 2025-02-07 05:13:23,418 INFO   Postgres|users|streaming  Skipping read chunk because snapshot is not running   [io.debezium.pipeline.source.snapshot.incremental.AbstractIncrementalSnapshotChangeEventSource]
      2025-02-06 21:13:45 2025-02-07 05:13:45,433 INFO   ||  WorkerSourceTask{id=user-management-connector-0} Committing offsets for 5 acknowledged messages   [org.apache.kafka.connect.runtime.WorkerSourceTask]
       

       

      Do you see the same behaviour using the latest released Debezium version?

      Yes; 3.1 is the latest release.

      Do you have the connector logs, ideally from start till finish?

      Yes

      How to reproduce the issue using our tutorial deployment?

      The docker-compose.yml below was used together with the Debezium connector configuration above, against Postgres 17.

      I restarted my Debezium connector and used pgAdmin to create a new test table named `new3` under the `public` schema. I then executed the signal INSERT query above, which produced the failing logs.

       

      Created the public.debezium_signals table with:

       

      CREATE TABLE IF NOT EXISTS public.debezium_signals
      (
          id character varying(200) COLLATE pg_catalog."default" NOT NULL,
          type character varying(200) COLLATE pg_catalog."default" NOT NULL,
          data character varying(2048) COLLATE pg_catalog."default",
          CONSTRAINT pk_debezium_signals PRIMARY KEY (id)
      );

       

      Created the public.new3 table with:

       

      CREATE TABLE IF NOT EXISTS public.new3
      (
          new3 bigint NOT NULL,
          val bigint,     
          CONSTRAINT new3_pkey PRIMARY KEY (new3)
      );

      docker-compose.yml

       

      services:
        zookeeper:
          image: confluentinc/cp-zookeeper:latest
          hostname: zookeeper
          container_name: zookeeper
          ports:
            - "2182:2181"
          environment:
            ZOOKEEPER_CLIENT_PORT: 2181
            ZOOKEEPER_TICK_TIME: 2000  
      
        kafka:
          image: confluentinc/cp-kafka:latest
          ports:
            - "9094:9092"
            - "29092:29092"
          environment:
            KAFKA_BROKER_ID: 1
            KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
            KAFKA_LISTENERS: LISTENER_INTERNAL://kafka:29092,LISTENER_EXTERNAL://kafka:9092
            KAFKA_ADVERTISED_LISTENERS: LISTENER_INTERNAL://kafka:29092,LISTENER_EXTERNAL://localhost:9092
            KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: LISTENER_INTERNAL:PLAINTEXT,LISTENER_EXTERNAL:PLAINTEXT
            KAFKA_INTER_BROKER_LISTENER_NAME: LISTENER_INTERNAL
            KAFKA_DEFAULT_REPLICATION_FACTOR: 1
            KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
            KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 100
          depends_on:
            [ zookeeper ]  
      
        debezium-connect:
          image: quay.io/debezium/connect:3.1
          platform: linux/amd64
          ports:
            - "8083:8083"
          environment:
            CONFIG_STORAGE_TOPIC: my_connect_configs
            OFFSET_STORAGE_TOPIC: my_connect_offsets
            STATUS_STORAGE_TOPIC: my_connect_statuses
            BOOTSTRAP_SERVERS: kafka:29092
            GROUP_ID: 1
          depends_on:
            [ kafka ]
       

       

       


              Assignee: Unassigned
              Reporter: Liam Wu (liamdev1)