Details
- Type: Bug
- Resolution: Obsolete
- Priority: Major
- Affects Version/s: 2.2.1.Final
Description
In order to make your issue reports as actionable as possible, please provide the following information, depending on the issue type.
Bug report
For bug reports, provide this information, please:
What Debezium connector do you use and what version?
MSSQL Connector 2.2.1
What is the connector configuration?
connector.class=io.debezium.connector.sqlserver.SqlServerConnector
database.history.consumer.sasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required;
tasks.max=1
schema.history.internal.consumer.sasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required;
database.history.producer.sasl.client.callback.handler.class=software.amazon.msk.auth.iam.IAMClientCallbackHandler
database.history.consumer.sasl.client.callback.handler.class=software.amazon.msk.auth.iam.IAMClientCallbackHandler
database.history.consumer.security.protocol=SASL_SSL
include.schema.changes=true
topic.prefix=employeesdebezium-mssql
schema.history.internal.kafka.topic=employees-debezium-schema-history
schema.history.internal.producer.security.protocol=SASL_SSL
sasl.mechanism=AWS_MSK_IAM
signal.data.collection=my_db.my_schema.dbz_signal
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter=org.apache.kafka.connect.json.JsonConverter
database.history.sasl.mechanism=AWS_MSK_IAM
schema.history.internal.producer.sasl.mechanism=AWS_MSK_IAM
database.history.producer.sasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required;
database.history.producer.sasl.mechanism=AWS_MSK_IAM
database.history.sasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required;
database.encrypt=false
schema.history.internal.consumer.sasl.client.callback.handler.class=software.amazon.msk.auth.iam.IAMClientCallbackHandler
database.user=admin
database.names=my_db
database.server.id=*****
schema.history.internal.producer.sasl.client.callback.handler.class=software.amazon.msk.auth.iam.IAMClientCallbackHandler
database.history.producer.security.protocol=SASL_SSL
schema.history.internal.kafka.bootstrap.servers=********
topics.regex=debezium-employees-msql*
database.port=****
key.converter.schemas.enable=false
max.request.size=10000000
producer.override.max.request.size=10000000
database.history.sasl.client.callback.handler.class=software.amazon.msk.auth.iam.IAMClientCallbackHandler
database.hostname=**********
database.password=***********
value.converter.schemas.enable=false
name=employees-debezium-mssql-source8
schema.history.internal.producer.sasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required;
schema.history.internal.consumer.sasl.mechanism=AWS_MSK_IAM
database.history.security.protocol=SASL_SSL
table.include.list=my_schema.employees,my_schema.dbz_signal
database.history.consumer.sasl.mechanism=AWS_MSK_IAM
schema.history.internal.consumer.security.protocol=SASL_SSL
What is the captured database version and mode of deployment?
(E.g. on-premises, with a specific cloud provider, etc.)
RDS (AWS) Deployment of SQL Server
Engine Version:
15.00.4316.3.v1
Engine:
SQL Server Standard Edition
What behaviour do you expect?
1. Snapshot is performed with the following data (signaling is enabled, and dbz_signal is included in table.include.list):
INSERT INTO my_schema.dbz_signal (id, type, data) VALUES('full-snapshot6', 'execute-snapshot', '{"data-collections": ["my_db.my_schema.employees"], "surrogate-key": "id"}');
2. All records in table my_db.my_schema.employees are streamed into the Kafka topic.
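For reference, a sketch of how the signaling table behind the INSERT above would be set up on SQL Server. The column sizes follow the structure recommended in the Debezium signaling documentation; the CDC-enablement call is illustrative of this setup and not taken from the report:

```sql
-- Signal table structure as recommended by the Debezium signaling docs.
CREATE TABLE my_schema.dbz_signal (
    id   VARCHAR(42) PRIMARY KEY,  -- arbitrary unique id of the signal
    type VARCHAR(32) NOT NULL,     -- e.g. 'execute-snapshot'
    data VARCHAR(2048) NULL        -- JSON payload of the signal
);

-- On SQL Server the signal table must also be CDC-enabled so the
-- connector can observe inserts into it.
EXEC sys.sp_cdc_enable_table
    @source_schema = N'my_schema',
    @source_name   = N'dbz_signal',
    @role_name     = NULL;
```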
What behaviour do you see?
Only the row containing max(id) is streamed into the Kafka topic; all other rows are not read. If an additional-condition is added (such as id < 9), then the row with id=8 is streamed into the Kafka topic, but no others.
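The expected behaviour can be sketched as a simplified model of incremental-snapshot chunking, where the table is walked by surrogate key in chunks until a query returns no rows (the names `snapshot_chunks` and `chunk_size` are illustrative; this is not Debezium's actual implementation). The observed behaviour looks as if the first chunk's lower bound already sits at the maximum key, so only the max(id) row is emitted:

```python
# Simplified model of chunked incremental snapshotting by surrogate key.
# Illustrates the *expected* behaviour: every row up to the logged upper
# bound ("will end at position [11]") is read, chunk by chunk.

def snapshot_chunks(rows, chunk_size=3):
    """Yield chunks of ids in ascending order, resuming after the last key seen."""
    ids = sorted(rows)
    max_key = ids[-1]       # snapshot window upper bound (id = 11 in the logs)
    last_key = None         # expected start: *before* the first key, not at max_key
    while True:
        chunk = [i for i in ids
                 if (last_key is None or i > last_key) and i <= max_key][:chunk_size]
        if not chunk:       # "No data returned by the query" -> snapshot finished
            return
        yield chunk
        last_key = chunk[-1]

rows = {i: f"employee-{i}" for i in range(1, 12)}   # ids 1..11, max id = 11
streamed = [i for chunk in snapshot_chunks(rows) for i in chunk]
assert streamed == list(range(1, 12))  # expected: every row reaches the topic
```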
Do you see the same behaviour using the latest released Debezium version?
(Ideally, also verify with latest Alpha/Beta/CR version)
Not tested yet.
Do you have the connector logs, ideally from start till finish?
(You might be asked later to provide DEBUG/TRACE level log)
[2023-09-22 10:36:01,878] INFO [employees-debezium-mssql-source8|task-0|offsets] WorkerSourceTask{id=employees-debezium-mssql-source8-0} Committing offsets (org.apache.kafka.connect.runtime.WorkerSourceTask:490)
[2023-09-22 10:36:01,878] INFO [employees-debezium-mssql-source8|task-0|offsets] WorkerSourceTask{id=employees-debezium-mssql-source8-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask:507)
[2023-09-22 10:36:04,963] INFO [employees-debezium-mssql-source8|task-0] Requested 'INCREMENTAL' snapshot of data collections '[my_db.my_schema.employees]' with additional condition 'No condition passed' and surrogate key 'id' (io.debezium.pipeline.signal.ExecuteSnapshot:52)
[2023-09-22 10:36:04,966] INFO [employees-debezium-mssql-source8|task-0] Incremental snapshot for table 'my_db.my_schema.employees' will end at position [11] (io.debezium.pipeline.source.snapshot.incremental.AbstractIncrementalSnapshotChangeEventSource:362)
[2023-09-22 10:36:05,024] INFO [employees-debezium-mssql-source8|task-0] 12 records sent during previous 00:19:36.527, last recorded offset of {server=employeesdebezium-mssql, database=my_db} partition is {transaction_id=null, event_serial_no=1, incremental_snapshot_maximum_key=aced0005757200135b4c6a6176612e6c616e672e4f626a6563743b90ce589f1073296c020000787000000001737200116a6176612e6c616e672e496e746567657212e2a0a4f781873802000149000576616c7565787200106a6176612e6c616e672e4e756d62657286ac951d0b94e08b02000078700000000b, commit_lsn=00000036:00000896:0003, change_lsn=00000036:00000896:0002, incremental_snapshot_collections=[{"incremental_snapshot_collections_id":"my_db.my_schema.employees","incremental_snapshot_collections_additional_condition":null,"incremental_snapshot_collections_surrogate_key":"id"}], incremental_snapshot_primary_key=aced000570} (io.debezium.connector.common.BaseSourceTask:195)
[2023-09-22 10:36:09,967] INFO [employees-debezium-mssql-source8|task-0] No data returned by the query, incremental snapshotting of table 'my_db.my_schema.employees' finished (io.debezium.pipeline.source.snapshot.incremental.AbstractIncrementalSnapshotChangeEventSource:368)
[2023-09-22 10:36:14,963] INFO [employees-debezium-mssql-source8|task-0] Skipping read chunk because snapshot is not running (io.debezium.pipeline.source.snapshot.incremental.AbstractIncrementalSnapshotChangeEventSource:311)
How to reproduce the issue using our tutorial deployment?
TBD