Type: Bug
Resolution: Won't Do
Priority: Major
Affects Version: 0.10.0.Final
Environment:
Debezium: 0.10
Kafka: 2.3
PostgreSQL: 11.6 (docker: debezium/example-postgres:0.10)
A Node.js script modified the PostgreSQL database every 200 milliseconds. I then updated "table.whitelist" through the Kafka Connect REST API, and afterwards found that three change events were missing from Kafka. The attachment pg_loss.log contains the full log.
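The script itself is not attached; the following is a minimal sketch of the kind of Node.js script used, written in TypeScript against the node-postgres (pg) client. The connection details are taken from the connector configuration below, while the table columns (id, val) and the insert-per-tick pattern are illustrative assumptions.

import { Client } from "pg";

// Connection details come from the connector config in this report;
// the table columns (id, val) are assumptions for illustration.
const client = new Client({
  host: "192.168.4.21",
  port: 5432,
  user: "postgres",
  password: "postgres",
  database: "postgres",
});

let i = 0;

async function main() {
  await client.connect();
  // One insert every 200 ms; a sequential id makes gaps easy to spot downstream.
  setInterval(async () => {
    i += 1;
    await client.query(
      "INSERT INTO inventory.test_debezium_1 (id, val) VALUES ($1, $2)",
      [i, `change-${i}`]
    );
  }, 200);
}

main().catch(console.error);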
The Debezium Connect Docker Compose configuration is:
connect1:
  image: debezium/connect:0.10
  ports:
    - 8088:8083
  environment:
    - HEAP_OPTS=-Xmx512M -Xms256M
    - LOG_LEVEL=DEBUG
    - BOOTSTRAP_SERVERS=192.168.4.22:9092,192.168.4.22:9093
    - GROUP_ID=3
    - CONFIG_STORAGE_TOPIC=my_connect_configs_222
    - OFFSET_STORAGE_TOPIC=my_connect_offsets_222
    - STATUS_STORAGE_TOPIC=my_connect_statuses_222
    - CONNECT_KEY_CONVERTER_SCHEMAS_ENABLE=false
    - CONNECT_VALUE_CONVERTER_SCHEMAS_ENABLE=false
    # - CONNECT_CONSUMER_MAX_PARTITION_FETCH_BYTES=15728640
    # - CONNECT_PRODUCER_MAX_PARTITION_FETCH_BYTES=15728640
    # - CONNECT_MAX_REQUEST_SIZE=15728640
    - CONNECT_PRODUCER_MAX_REQUEST_SIZE=20971520
    - CONNECT_DATABASE_HISTORY_KAFKA_RECOVERY_POLL_INTERVAL_MS=10000
The connector configuration is:
{
  "name": "test222_pg_inventory_1",
  "config": {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "database.user": "postgres",
    "database.dbname": "postgres",
    "slot.name": "test222_pg_inventory_1",
    "database.server.name": "test222_pg_inventory_1",
    "database.port": "5432",
    "plugin.name": "pgoutput",
    "schema.whitelist": "inventory",
    "table.whitelist": "inventory.test_debezium_1",
    "slot.drop_on_stop": "true",
    "decimal.handling.mode": "string",
    "database.hostname": "192.168.4.21",
    "database.password": "postgres",
    "snapshot.mode": "never"
  }
}
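The whitelist change was made through the Kafka Connect REST API. The exact request is not included in this report, so the sketch below only illustrates the PUT /connectors/{name}/config call, assuming Node 18+ (for the global fetch), a Connect REST endpoint reachable on localhost:8088 (the port mapped in the Compose file), and a hypothetical second table (inventory.test_debezium_2) added to table.whitelist.

// The Connect REST host is not given in the report; localhost is an assumption,
// 8088 is the mapped port from the Compose file above.
const CONNECT_URL = "http://localhost:8088";

// Same config as above, with a hypothetical second table added to table.whitelist.
const config = {
  "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
  "database.hostname": "192.168.4.21",
  "database.port": "5432",
  "database.user": "postgres",
  "database.password": "postgres",
  "database.dbname": "postgres",
  "database.server.name": "test222_pg_inventory_1",
  "slot.name": "test222_pg_inventory_1",
  "plugin.name": "pgoutput",
  "schema.whitelist": "inventory",
  "table.whitelist": "inventory.test_debezium_1,inventory.test_debezium_2",
  "slot.drop_on_stop": "true",
  "decimal.handling.mode": "string",
  "snapshot.mode": "never",
};

async function updateConnector(): Promise<void> {
  // PUT /connectors/{name}/config creates or updates the connector;
  // an update restarts its task with the new configuration.
  const res = await fetch(`${CONNECT_URL}/connectors/test222_pg_inventory_1/config`, {
    method: "PUT",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(config),
  });
  console.log(res.status, await res.json());
}

updateConnector().catch(console.error);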
The key timestamps are:
2019-12-06 07:11:53.197: started modifying the database every 200 milliseconds
2019-12-06 07:12:21,533: updated the connector config (changed "table.whitelist")
2019-12-06 15:13:05.263: stopped modifying the database
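The report does not show how the three missing events were identified. One way to check, sketched below on the assumption that each row carries the sequential id column used in the script sketch above, is to replay the Debezium topic (named <database.server.name>.<schema>.<table>) with a KafkaJS consumer and look for gaps in the ids.

import { Kafka } from "kafkajs";

const kafka = new Kafka({
  clientId: "pg-loss-check",
  brokers: ["192.168.4.22:9092", "192.168.4.22:9093"], // BOOTSTRAP_SERVERS from the Compose file
});
const consumer = kafka.consumer({ groupId: "pg-loss-check" });

async function main(): Promise<void> {
  await consumer.connect();
  // Debezium names the topic <database.server.name>.<schema>.<table>.
  await consumer.subscribe({
    topic: "test222_pg_inventory_1.inventory.test_debezium_1",
    fromBeginning: true,
  });

  const seen = new Set<number>();
  await consumer.run({
    eachMessage: async ({ message }) => {
      if (!message.value) return; // skip tombstones
      // schemas.enable=false, so the value is the plain Debezium change envelope.
      const event = JSON.parse(message.value.toString());
      if (event.after && typeof event.after.id === "number") seen.add(event.after.id);
    },
  });

  // On Ctrl+C, report any ids that never showed up in the topic.
  process.on("SIGINT", async () => {
    const ids = Array.from(seen);
    const max = ids.length ? Math.max(...ids) : 0;
    const missing: number[] = [];
    for (let id = 1; id <= max; id++) if (!seen.has(id)) missing.push(id);
    console.log("missing ids:", missing.length ? missing.join(", ") : "none");
    await consumer.disconnect();
    process.exit(0);
  });
}

main().catch(console.error);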
Is related to:
- DBZ-1666 Generate warning for connectors with automatically dropped slots (Closed)