Debezium / DBZ-9494

Ad hoc blocking snapshot can leave streaming paused forever if additional-conditions is bad


    • Type: Bug
    • Resolution: Done
    • Priority: Major
    • Version: 3.3.0.Final
    • Component: debezium-core

      In order to make your issue reports as actionable as possible, please provide the following information, depending on the issue type.

      Bug report

      For bug reports, provide this information, please:

      What Debezium connector do you use and what version?

      I have tested both the Oracle and SQL Server connectors. I use v3.1.2, but I have also tested on 3.2 and 3.3 and see the same behaviour.

      What is the connector configuration?

      Pasting the one for SQL Server:

      {
        "connector.class": "io.debezium.connector.sqlserver.SqlServerConnector",
        "database.hostname": "x.x.x.x",
        "database.port": "1433",
        "database.user": "xxx",
        "database.password": "xxxxx",
        "database.names": "xxx",
        "topic.prefix": "test2",
        "tasks.max": "1",
        "schema.history.internal.kafka.bootstrap.servers": "localhost:9092",
        "schema.history.internal.kafka.topic": "history2",
        "table.include.list": "dbo.customers",
        "signal.data.collection": "bgoyal_test.dbo.dbz_signaling",
        "key.converter": "io.confluent.connect.avro.AvroConverter",
        "value.converter": "io.confluent.connect.avro.AvroConverter",
        "key.converter.schema.registry.url": "http://localhost:8081",
        "value.converter.schema.registry.url": "http://localhost:8081",
        "confluent.topic.bootstrap.servers": "localhost:9092",
        "snapshot.mode": "no_data",
        "database.encrypt": "false",
        "schema.history.internal.store.only.captured.tables.ddl": true,
        "internal.log.position.check.enable": "false"
      } 
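
      For reference, the table behind `signal.data.collection` can be created along the lines of the schema suggested in the Debezium signalling documentation (a sketch; the name follows the config above, and per the logs below the table is also CDC-enabled, which source signalling on SQL Server requires):

      CREATE TABLE dbo.dbz_signaling (
          id   VARCHAR(42)  PRIMARY KEY,  -- unique id of the signal
          type VARCHAR(32)  NOT NULL,     -- signal type, e.g. 'execute-snapshot'
          data VARCHAR(2048) NULL         -- JSON payload of the signal
      );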

       

      What is the captured database version and mode of deployment?

      This is an on-premises deployment.

      What behavior do you expect?

      I expect streaming to resume regardless of whether the blocking snapshot completes or fails.

      What behavior do you see?

      There is a case where streaming remains paused even though the blocking snapshot has failed.

      Do you see the same behaviour using the latest released Debezium version?

      Yes. I used 3.3.

      Do you have the connector logs, ideally from start till finish?

       

      [2025-09-23 12:11:20,229] INFO [test_v3_2|task-0] Connected metrics set to 'true' (io.debezium.pipeline.ChangeEventSourceCoordinator:492)
      [2025-09-23 12:11:20,231] INFO [test_v3_2|task-0] No incremental snapshot in progress, no action needed on start (io.debezium.pipeline.source.snapshot.incremental.AbstractIncrementalSnapshotChangeEventSource:228)
      [2025-09-23 12:11:20,233] INFO [test_v3_2|task-0] SignalProcessor started. Scheduling it every 5000ms (io.debezium.pipeline.signal.SignalProcessor:100)
      [2025-09-23 12:11:20,233] INFO [test_v3_2|task-0] Creating thread debezium-sqlserverconnector-test2-SignalProcessor (io.debezium.util.Threads:290)
      [2025-09-23 12:11:20,234] INFO [test_v3_2|task-0] Starting streaming (io.debezium.connector.sqlserver.SqlServerChangeEventSourceCoordinator:109)
      [2025-09-23 12:11:20,235] INFO [test_v3_2|task-0] Last position recorded in offsets is 000004e7:00004780:0003(000004e7:00004780:0002,0)[1] (io.debezium.connector.sqlserver.SqlServerStreamingChangeEventSource:166)
      [2025-09-23 12:11:21,385] INFO 127.0.0.1 - - [23/Sept/2025:06:41:21 +0000] "GET /v1/metadata/id HTTP/1.1" 200 119 "-" "armeria/1.24.3" 62 (org.apache.kafka.connect.runtime.rest.RestServer:62)
      [2025-09-23 12:11:21,575] INFO [test_v3_2|worker] Requested thread factory for component JdbcConnection, id = JdbcConnection named = jdbc-connection-close (io.debezium.util.Threads:273)
      [2025-09-23 12:11:21,575] INFO [test_v3_2|worker] Creating thread debezium-jdbcconnection-JdbcConnection-jdbc-connection-close (io.debezium.util.Threads:290)
      [2025-09-23 12:11:21,576] INFO [test_v3_2|worker] Connection gracefully closed (io.debezium.jdbc.JdbcConnection:988)
      [2025-09-23 12:11:22,866] INFO [test_v3_2|task-0] CDC is enabled for table Capture instance "dbo_MET_Romania_Taget1" [sourceTableId=bgoyal_test.dbo.MET Romania$Taget1, changeTableId=bgoyal_test.cdc.dbo_MET_Romania_Taget1_CT, startLsn=000004e7:00004640:0003, changeTableObjectId=370100359, stopLsn=NULL] but the table is not on connector's table include list (io.debezium.connector.sqlserver.SqlServerStreamingChangeEventSource:445)
      [2025-09-23 12:11:22,868] INFO [test_v3_2|task-0] CDC is enabled for table Capture instance "dbo_test_heartbeat_table" [sourceTableId=bgoyal_test.dbo.test_heartbeat_table, changeTableId=bgoyal_test.cdc.dbo_test_heartbeat_table_CT, startLsn=000004e7:00004640:0003, changeTableObjectId=674101442, stopLsn=NULL] but the table is not on connector's table include list (io.debezium.connector.sqlserver.SqlServerStreamingChangeEventSource:445)
      [2025-09-23 12:11:22,868] INFO [test_v3_2|task-0] CDC is enabled for table Capture instance "dbo_dummy_heartbeat_table" [sourceTableId=bgoyal_test.dbo.dummy_heartbeat_table, changeTableId=bgoyal_test.cdc.dbo_dummy_heartbeat_table_CT, startLsn=000004e7:00004640:0003, changeTableObjectId=770101784, stopLsn=NULL] but the table is not on connector's table include list (io.debezium.connector.sqlserver.SqlServerStreamingChangeEventSource:445)
      [2025-09-23 12:11:22,869] INFO [test_v3_2|task-0] CDC is enabled for table Capture instance "dbo_Documents" [sourceTableId=bgoyal_test.dbo.Documents, changeTableId=bgoyal_test.cdc.dbo_Documents_CT, startLsn=000004e7:00004640:0003, changeTableObjectId=898102240, stopLsn=NULL] but the table is not on connector's table include list (io.debezium.connector.sqlserver.SqlServerStreamingChangeEventSource:445)
      [2025-09-23 12:11:22,869] INFO [test_v3_2|task-0] CDC is enabled for table Capture instance "dbo_demo_heartbeat_table" [sourceTableId=bgoyal_test.dbo.demo_heartbeat_table, changeTableId=bgoyal_test.cdc.dbo_demo_heartbeat_table_CT, startLsn=000004e7:00004640:0003, changeTableObjectId=1314103722, stopLsn=NULL] but the table is not on connector's table include list (io.debezium.connector.sqlserver.SqlServerStreamingChangeEventSource:445)
      [2025-09-23 12:11:22,869] INFO [test_v3_2|task-0] CDC is enabled for table Capture instance "dbo_HUGEDATA" [sourceTableId=bgoyal_test.dbo.HUGEDATA, changeTableId=bgoyal_test.cdc.dbo_HUGEDATA_CT, startLsn=000004e7:00004640:0003, changeTableObjectId=1333579789, stopLsn=NULL] but the table is not on connector's table include list (io.debezium.connector.sqlserver.SqlServerStreamingChangeEventSource:445)
      [2025-09-23 12:11:22,869] INFO [test_v3_2|task-0] CDC is enabled for table Capture instance "dbo_heartbeat_table_prod" [sourceTableId=bgoyal_test.dbo.heartbeat_table_prod, changeTableId=bgoyal_test.cdc.dbo_heartbeat_table_prod_CT, startLsn=000004e7:00004640:0003, changeTableObjectId=1442104178, stopLsn=NULL] but the table is not on connector's table include list (io.debezium.connector.sqlserver.SqlServerStreamingChangeEventSource:445)
      [2025-09-23 12:11:22,869] INFO [test_v3_2|task-0] CDC is enabled for table Capture instance "dbo_Case4" [sourceTableId=bgoyal_test.dbo.Case4, changeTableId=bgoyal_test.cdc.dbo_Case4_CT, startLsn=000004e7:00004640:0003, changeTableObjectId=1461580245, stopLsn=NULL] but the table is not on connector's table include list (io.debezium.connector.sqlserver.SqlServerStreamingChangeEventSource:445)
      [2025-09-23 12:11:22,869] INFO [test_v3_2|task-0] CDC is enabled for table Capture instance "dbo_nullAndTimeCase" [sourceTableId=bgoyal_test.dbo.nullAndTimeCase, changeTableId=bgoyal_test.cdc.dbo_nullAndTimeCase_CT, startLsn=000004e7:00004640:0003, changeTableObjectId=1605580758, stopLsn=NULL] but the table is not on connector's table include list (io.debezium.connector.sqlserver.SqlServerStreamingChangeEventSource:445)
      [2025-09-23 12:11:22,869] INFO [test_v3_2|task-0] CDC is enabled for table Capture instance "dbo_MET_Romania_Taget" [sourceTableId=bgoyal_test.dbo.MET Romania$Taget, changeTableId=bgoyal_test.cdc.dbo_MET_Romania_Taget_CT, startLsn=000004e7:00004640:0003, changeTableObjectId=2101582525, stopLsn=NULL] but the table is not on connector's table include list (io.debezium.connector.sqlserver.SqlServerStreamingChangeEventSource:445)
      [2025-09-23 12:11:23,428] INFO [test_v3_2|task-0] Skipping change ChangeTableResultSet{changeTable=Capture instance "dbo_dbz_signaling" [sourceTableId=bgoyal_test.dbo.dbz_signaling, changeTableId=bgoyal_test.cdc.dbo_dbz_signaling_CT, startLsn=000004e7:00004640:0003, changeTableObjectId=1877581727, stopLsn=NULL], resultSet=SQLServerResultSet:6, completed=false, currentChangePosition=000004e7:00004780:0003(000004e7:00004780:0002,2)} as its order in the transaction 1 is smaller than or equal to the last recorded operation 000004e7:00004780:0003(000004e7:00004780:0002,0)[1] (io.debezium.connector.sqlserver.SqlServerStreamingChangeEventSource:292)
      [2025-09-23 12:11:39,124] INFO 127.0.0.1 - - [23/Sept/2025:06:41:39 +0000] "GET /v1/metadata/id HTTP/1.1" 200 119 "-" "armeria/1.24.3" 9 (org.apache.kafka.connect.runtime.rest.RestServer:62)
      [2025-09-23 12:12:01,603] INFO 127.0.0.1 - - [23/Sept/2025:06:42:01 +0000] "GET /v1/metadata/id HTTP/1.1" 200 119 "-" "armeria/1.24.3" 8 (org.apache.kafka.connect.runtime.rest.RestServer:62)
      [2025-09-23 12:12:21,548] INFO 127.0.0.1 - - [23/Sept/2025:06:42:21 +0000] "GET /v1/metadata/id HTTP/1.1" 200 119 "-" "armeria/1.24.3" 8 (org.apache.kafka.connect.runtime.rest.RestServer:62)
      [2025-09-23 12:12:44,099] INFO 127.0.0.1 - - [23/Sept/2025:06:42:44 +0000] "GET /v1/metadata/id HTTP/1.1" 200 119 "-" "armeria/1.24.3" 10 (org.apache.kafka.connect.runtime.rest.RestServer:62)
      [2025-09-23 12:13:03,524] INFO 127.0.0.1 - - [23/Sept/2025:06:43:03 +0000] "GET /v1/metadata/id HTTP/1.1" 200 119 "-" "armeria/1.24.3" 5 (org.apache.kafka.connect.runtime.rest.RestServer:62)
      [2025-09-23 12:13:21,700] INFO 127.0.0.1 - - [23/Sept/2025:06:43:21 +0000] "GET /v1/metadata/id HTTP/1.1" 200 119 "-" "armeria/1.24.3" 8 (org.apache.kafka.connect.runtime.rest.RestServer:62)
      [2025-09-23 12:13:40,309] INFO 127.0.0.1 - - [23/Sept/2025:06:43:40 +0000] "GET /v1/metadata/id HTTP/1.1" 200 119 "-" "armeria/1.24.3" 3 (org.apache.kafka.connect.runtime.rest.RestServer:62)
      [2025-09-23 12:14:04,111] INFO 127.0.0.1 - - [23/Sept/2025:06:44:04 +0000] "GET /v1/metadata/id HTTP/1.1" 200 119 "-" "armeria/1.24.3" 11 (org.apache.kafka.connect.runtime.rest.RestServer:62)
      [2025-09-23 12:14:28,026] INFO 127.0.0.1 - - [23/Sept/2025:06:44:28 +0000] "GET /v1/metadata/id HTTP/1.1" 200 119 "-" "armeria/1.24.3" 6 (org.apache.kafka.connect.runtime.rest.RestServer:62)
      [2025-09-23 12:14:41,560] INFO [test_v3_2|task-0] Requested 'BLOCKING' snapshot of data collections '[bgoyal_test.dbo.CUSTOMERS]' with additional conditions '[AdditionalCondition{dataCollection=bgoyal_test.dbo.CUSTOMERS, filter='null'}]' and surrogate key 'PK of table will be used' (io.debezium.pipeline.signal.actions.snapshotting.ExecuteSnapshot:64)
      [2025-09-23 12:14:41,561] INFO [test_v3_2|task-0] Creating thread debezium-sqlserverconnector-test2-blocking-snapshot (io.debezium.util.Threads:290)
      [2025-09-23 12:14:41,604] INFO [test_v3_2|task-0] 1 records sent during previous 00:03:22.181, last recorded offset of {server=test2, database=bgoyal_test} partition is {event_serial_no=1, commit_lsn=000004e7:00004a98:0003, change_lsn=000004e7:00004a98:0002} (io.debezium.connector.common.BaseSourceTask:368)
      [2025-09-23 12:14:41,712] INFO [test_v3_2|task-0] creating interceptor (io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor:74)
      [2025-09-23 12:14:41,728] INFO [test_v3_2|task-0] MonitoringInterceptorConfig values:
      confluent.monitoring.interceptor.publishMs = 15000
      confluent.monitoring.interceptor.topic = _confluent-monitoring
      (io.confluent.monitoring.clients.interceptor.MonitoringInterceptorConfig:375)
      [2025-09-23 12:14:41,738] INFO [test_v3_2|task-0] ProducerConfig values:
      acks = -1
      auto.include.jmx.reporter = true
      batch.size = 16384
      bootstrap.servers = [localhost:9092]
      buffer.memory = 33554432
      client.dns.lookup = use_all_dns_ips
      client.id = confluent.monitoring.interceptor.connector-producer-test_v3_2-0
      compression.type = zstd
      confluent.lkc.id = null
      confluent.proxy.protocol.client.address = null
      confluent.proxy.protocol.client.mode = PROXY
      confluent.proxy.protocol.client.port = null
      confluent.proxy.protocol.client.version = NONE
      connections.max.idle.ms = 540000
      delivery.timeout.ms = 120000
      enable.idempotence = true
      enable.metrics.push = true
      interceptor.classes = []
      key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
      linger.ms = 500
      max.block.ms = 60000
      max.in.flight.requests.per.connection = 1
      max.request.size = 10485760
      metadata.max.age.ms = 300000
      metadata.max.idle.ms = 300000
      metric.reporters = []
      metrics.num.samples = 2
      metrics.recording.level = INFO
      metrics.sample.window.ms = 30000
      partitioner.adaptive.partitioning.enable = true
      partitioner.availability.timeout.ms = 0
      partitioner.class = null
      partitioner.ignore.keys = false
      receive.buffer.bytes = 32768
      reconnect.backoff.max.ms = 1000
      reconnect.backoff.ms = 50
      request.timeout.ms = 30000
      retries = 2147483647
      retry.backoff.max.ms = 1000
      retry.backoff.ms = 500
      sasl.client.callback.handler.class = null
      sasl.jaas.config = null
      sasl.kerberos.kinit.cmd = /usr/bin/kinit
      sasl.kerberos.min.time.before.relogin = 60000
      sasl.kerberos.service.name = null
      sasl.kerberos.ticket.renew.jitter = 0.05
      sasl.kerberos.ticket.renew.window.factor = 0.8
      sasl.login.callback.handler.class = null
      sasl.login.class = null
      sasl.login.connect.timeout.ms = null
      sasl.login.read.timeout.ms = null
      sasl.login.refresh.buffer.seconds = 300
      sasl.login.refresh.min.period.seconds = 60
      sasl.login.refresh.window.factor = 0.8
      sasl.login.refresh.window.jitter = 0.05
      sasl.login.retry.backoff.max.ms = 10000
      sasl.login.retry.backoff.ms = 100
      sasl.mechanism = GSSAPI
      sasl.oauthbearer.clock.skew.seconds = 30
      sasl.oauthbearer.expected.audience = null
      sasl.oauthbearer.expected.issuer = null
      sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
      sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
      sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
      sasl.oauthbearer.jwks.endpoint.url = null
      sasl.oauthbearer.scope.claim.name = scope
      sasl.oauthbearer.sub.claim.name = sub
      sasl.oauthbearer.token.endpoint.url = null
      security.protocol = PLAINTEXT
      security.providers = null
      send.buffer.bytes = 131072
      socket.connection.setup.timeout.max.ms = 30000
      socket.connection.setup.timeout.ms = 10000
      ssl.cipher.suites = null
      ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
      ssl.endpoint.identification.algorithm = https
      ssl.engine.factory.class = null
      ssl.key.password = null
      ssl.keymanager.algorithm = SunX509
      ssl.keystore.certificate.chain = null
      ssl.keystore.key = null
      ssl.keystore.location = null
      ssl.keystore.password = null
      ssl.keystore.type = JKS
      ssl.protocol = TLSv1.3
      ssl.provider = null
      ssl.secure.random.implementation = null
      ssl.trustmanager.algorithm = PKIX
      ssl.truststore.certificates = null
      ssl.truststore.location = null
      ssl.truststore.password = null
      ssl.truststore.type = JKS
      transaction.timeout.ms = 60000
      transactional.id = null
      value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
      (org.apache.kafka.clients.producer.ProducerConfig:375)
      [2025-09-23 12:14:41,739] INFO [test_v3_2|task-0] initializing Kafka metrics collector (org.apache.kafka.common.telemetry.internals.KafkaMetricsCollector:297)
      [2025-09-23 12:14:41,743] INFO [test_v3_2|task-0] [Producer clientId=confluent.monitoring.interceptor.connector-producer-test_v3_2-0] Instantiated an idempotent producer. (org.apache.kafka.clients.producer.KafkaProducer:598)
      [2025-09-23 12:14:41,745] INFO [test_v3_2|task-0] Kafka version: 7.7.0-ce (org.apache.kafka.common.utils.AppInfoParser:131)
      [2025-09-23 12:14:41,745] INFO [test_v3_2|task-0] Kafka commitId: 7003f3d306710734 (org.apache.kafka.common.utils.AppInfoParser:132)
      [2025-09-23 12:14:41,745] INFO [test_v3_2|task-0] Kafka startTimeMs: 1758609881745 (org.apache.kafka.common.utils.AppInfoParser:133)
      [2025-09-23 12:14:41,745] INFO [test_v3_2|task-0] interceptor=confluent.monitoring.interceptor.connector-producer-test_v3_2-0 created for client_id=connector-producer-test_v3_2-0 client_type=PRODUCER session= cluster=fCabWSB_R5mVH7YVvleigQ (io.confluent.monitoring.clients.interceptor.MonitoringInterceptor:153)
      [2025-09-23 12:14:41,748] INFO [test_v3_2|task-0] [Producer clientId=confluent.monitoring.interceptor.connector-producer-test_v3_2-0] Cluster ID: fCabWSB_R5mVH7YVvleigQ (org.apache.kafka.clients.Metadata:356)
      [2025-09-23 12:14:41,749] INFO [test_v3_2|task-0] [Producer clientId=confluent.monitoring.interceptor.connector-producer-test_v3_2-0] ProducerId set to 25 with epoch 0 (org.apache.kafka.clients.producer.internals.TransactionManager:502)
      [2025-09-23 12:14:41,824] INFO [test_v3_2|task-0] Streaming will now pause (io.debezium.connector.sqlserver.SqlServerChangeEventSourceCoordinator:133)
      [2025-09-23 12:14:41,824] INFO [test_v3_2|task-0] Starting snapshot (io.debezium.pipeline.ChangeEventSourceCoordinator:251)
      [2025-09-23 12:14:44,632] INFO 127.0.0.1 - - [23/Sept/2025:06:44:44 +0000] "GET /v1/metadata/id HTTP/1.1" 200 119 "-" "armeria/1.24.3" 3 (org.apache.kafka.connect.runtime.rest.RestServer:62)
      [2025-09-23 12:14:51,518] INFO 127.0.0.1 - - [23/Sept/2025:06:44:51 +0000] "GET /connectors/test_v3_2/status HTTP/1.1" 200 163 "http://localhost:9021/clusters/fCabWSB_R5mVH7YVvleigQ/management/connect/connect-default/connectors/sources/test_v3_2" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/140.0.0.0 Safari/537.36" 29 (org.apache.kafka.connect.runtime.rest.RestServer:62)
      [2025-09-23 12:14:51,519] INFO 127.0.0.1 - - [23/Sept/2025:06:44:51 +0000] "GET /connectors/test_v3_2 HTTP/1.1" 200 1045 "http://localhost:9021/clusters/fCabWSB_R5mVH7YVvleigQ/management/connect/connect-default/connectors/sources/test_v3_2" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/140.0.0.0 Safari/537.36" 30 (org.apache.kafka.connect.runtime.rest.RestServer:62)
      [2025-09-23 12:14:51,523] INFO 127.0.0.1 - - [23/Sept/2025:06:44:51 +0000] "GET /connectors/test_v3_2/status HTTP/1.1" 200 163 "http://localhost:9021/clusters/fCabWSB_R5mVH7YVvleigQ/management/connect/connect-default/connectors/sources/test_v3_2" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/140.0.0.0 Safari/537.36" 2 (org.apache.kafka.connect.runtime.rest.RestServer:62)
      [2025-09-23 12:14:51,523] INFO 127.0.0.1 - - [23/Sept/2025:06:44:51 +0000] "GET /connectors/test_v3_2 HTTP/1.1" 200 1045 "http://localhost:9021/clusters/fCabWSB_R5mVH7YVvleigQ/management/connect/connect-default/connectors/sources/test_v3_2" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/140.0.0.0 Safari/537.36" 2 (org.apache.kafka.connect.runtime.rest.RestServer:62)
      [2025-09-23 12:15:02,806] INFO 127.0.0.1 - - [23/Sept/2025:06:45:02 +0000] "GET /v1/metadata/id HTTP/1.1" 200 119 "-" "armeria/1.24.3" 7 (org.apache.kafka.connect.runtime.rest.RestServer:62)
      [2025-09-23 12:15:19,622] INFO [test_v3_2|task-0|offsets] WorkerSourceTask{id=test_v3_2-0} Committing offsets for 1 acknowledged messages (org.apache.kafka.connect.runtime.WorkerSourceTask:233)
      [2025-09-23 12:15:20,516] INFO 127.0.0.1 - - [23/Sept/2025:06:45:20 +0000] "GET /v1/metadata/id HTTP/1.1" 200 119 "-" "armeria/1.24.3" 2 (org.apache.kafka.connect.runtime.rest.RestServer:62)
      [2025-09-23 12:15:43,658] INFO 127.0.0.1 - - [23/Sept/2025:06:45:43 +0000] "GET /v1/metadata/id HTTP/1.1" 200 119 "-" "armeria/1.24.3" 3 (org.apache.kafka.connect.runtime.rest.RestServer:62) 

       

      How to reproduce the issue using our tutorial deployment?

      1. Start any connector (SQL Server in this example) with signalling configured. I used source signalling, with a signal table named 'bgoyal_test.dbo.dbz_signaling'.
      2. Once the connector reaches the streaming phase, insert a signal into the table:

         INSERT INTO dbz_signaling (id, type, data)
         VALUES (
             'new-delay7',
             'execute-snapshot',
             '{
                 "type": "blocking",
                 "data-collections": [
                     "bgoyal_test.dbo.CUSTOMERS"
                 ],
                 "additional-conditions": [
                     {
                         "data-collection": "bgoyal_test.dbo.CUSTOMERS"
                     }
                 ]
             }'
         );

      3. Observe the logs stuck at `Starting snapshot`.
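
      For comparison, a signal whose additional-conditions entry supplies an explicit `filter` does not hit this code path. A sketch reusing the names above; the `id > 0` predicate is only a placeholder:

         INSERT INTO dbz_signaling (id, type, data)
         VALUES (
             'new-delay8',
             'execute-snapshot',
             '{
                 "type": "blocking",
                 "data-collections": ["bgoyal_test.dbo.CUSTOMERS"],
                 "additional-conditions": [
                     {
                         "data-collection": "bgoyal_test.dbo.CUSTOMERS",
                         "filter": "id > 0"
                     }
                 ]
             }'
         );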
         

      This happens because `filter` is null: building the blocking snapshot task throws a NullPointerException (see the minimal sketch after the stack trace). Because the failure occurs after streaming has been paused, but outside the `finally` block that resumes streaming, streaming remains paused indefinitely. The error the blocking snapshot thread encounters is:

      java.lang.NullPointerException
      at java.base/java.util.Objects.requireNonNull(Objects.java:233)
      at java.base/java.util.stream.Collectors.lambda$uniqKeysMapAccumulator$1(Collectors.java:180)
      at java.base/java.util.stream.ReduceOps$3ReducingSink.accept(ReduceOps.java:169)
      at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1708)
      at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509)
      at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499)
      at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:921)
      at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
      at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:682)
      at io.debezium.relational.RelationalSnapshotChangeEventSource.getBlockingSnapshottingTask(RelationalSnapshotChangeEventSource.java:263)
      at io.debezium.pipeline.ChangeEventSourceCoordinator.lambda$doBlockingSnapshot$4(ChangeEventSourceCoordinator.java:253)
      at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:572)
      at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
      at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
      at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
      at java.base/java.lang.Thread.run(Thread.java:1583) 
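
      The NPE itself comes from `Collectors.toMap`, which rejects null values. A minimal, self-contained sketch of that mechanism, with a hypothetical `AdditionalCondition` record standing in for the parsed signal payload:

      import java.util.List;
      import java.util.Map;
      import java.util.stream.Collectors;

      public class ToMapNpeDemo {

          // Hypothetical stand-in for the parsed additional-conditions entry:
          // a data collection name plus an optional filter expression.
          record AdditionalCondition(String dataCollection, String filter) {}

          public static void main(String[] args) {
              // Mirrors the signal above: "filter" was omitted, so it is null.
              List<AdditionalCondition> conditions =
                      List.of(new AdditionalCondition("bgoyal_test.dbo.CUSTOMERS", null));

              // Collectors.toMap calls Objects.requireNonNull on each mapped value
              // (uniqKeysMapAccumulator), which is exactly the NPE in the trace.
              Map<String, String> filterByCollection = conditions.stream()
                      .collect(Collectors.toMap(
                              AdditionalCondition::dataCollection,
                              AdditionalCondition::filter)); // throws NullPointerException

              System.out.println(filterByCollection); // never reached
          }
      }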


      Implementation ideas (optional)

      The bug should be fixed by removing the inner `try` and moving the `catch` and `finally` to the outer `try` in `ChangeEventSourceCoordinator.java`, so that the `finally` that resumes streaming also covers the task-construction step. A schematic sketch of that shape follows.
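
      A minimal, self-contained sketch with hypothetical names (`streamingPaused`, `runBlockingSnapshot`), not the actual coordinator code: the `finally` that clears the pause flag encloses everything that can throw, so a failure while building the snapshotting task still resumes streaming.

      import java.util.concurrent.ExecutorService;
      import java.util.concurrent.Executors;
      import java.util.concurrent.TimeUnit;
      import java.util.concurrent.atomic.AtomicBoolean;

      public class BlockingSnapshotSketch {

          final AtomicBoolean streamingPaused = new AtomicBoolean(false);

          void runBlockingSnapshot(ExecutorService executor, Runnable snapshot) {
              executor.submit(() -> {
                  streamingPaused.set(true);      // "Streaming will now pause"
                  try {
                      snapshot.run();             // build + run the snapshot; may throw
                  }
                  catch (Exception e) {
                      System.err.println("Blocking snapshot failed: " + e);
                  }
                  finally {
                      streamingPaused.set(false); // streaming always resumes
                  }
              });
          }

          public static void main(String[] args) throws InterruptedException {
              var sketch = new BlockingSnapshotSketch();
              var executor = Executors.newSingleThreadExecutor();

              // Simulate this report's failure mode: the task throws an NPE.
              sketch.runBlockingSnapshot(executor, () -> {
                  throw new NullPointerException("filter");
              });

              executor.shutdown();
              executor.awaitTermination(5, TimeUnit.SECONDS);
              System.out.println("streamingPaused = " + sketch.streamingPaused.get()); // false
          }
      }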

       

              Assignee: Unassigned
              Reporter: Bhagyashree Goyal