Type: Bug
Resolution: Done
Priority: Major
Fix Version: 2.7.0.Beta2
Component: None
In order to make your issue reports as actionable as possible, please provide the following information, depending on the issue type.
Bug report
For bug reports, provide this information, please:
What Debezium connector do you use and what version?
Debezium JDBC sink connector (io.debezium.connector.jdbc.JdbcSinkConnector), version 2.7.0.
What is the connector configuration?
"config": { "heartbeat.interval.ms": "3000", "autoReconnect":"true", "connector.class": "io.debezium.connector.jdbc.JdbcSinkConnector", "tasks.max": "1", "connection.url":"jdbc:postgresql://postgres:5432/sink", "connection.username": "postgres", "connection.password": "postgres", "insert.mode": "upsert", "delete.enabled": "true", "primary.key.mode": "record_key", "schema.evolution": "basic", "database.time_zone": "UTC", "auto.evolve": "true", "quote.identifiers":"true", "auto.create":"true", "topics.regex":"debezium_.*", "pk.mode" :"kafka", "transforms": "dropPrefix", "transforms.dropPrefix.type": "org.apache.kafka.connect.transforms.RegexRouter", "transforms.dropPrefix.regex": "debezium_(.*)_\\.(.*)\\.(.*)", "transforms.dropPrefix.replacement": "$1_$3" }
What is the captured database version and mode of deployment?
(E.g. on-premises, with a specific cloud provider, etc.)
PostgreSQL, on-premises.
What behaviour do you expect?
The record is stored in the database.
What behaviour do you see?
When a table has multiple columns of array type with different element types, e.g.
text_type -> text[]
uuid_type -> uuid[]
the current ARRAY implementation in the PostgreSQL dialect does not work: when a table definition contains more than one array column, all of them reuse the last element descriptor instead of each getting its own. For example, given:
modes -> text[]
routes -> text[]
operator_ids -> uuid[]
due to the bug, all three columns are bound with the last (uuid) element descriptor, and the resolved types are:
modes -> uuid[]
routes -> uuid[]
operator_ids -> uuid[]
Error:
org.apache.kafka.connect.errors.ConnectException: Failed to process a sink record
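The root cause is visible in the failing statement further below: parameter $17 is modes, the 17th column of the INSERT, and its text values ('{"a","b"}') are rejected by the uuid binding that was resolved for operator_ids. For contrast, a minimal JDBC sketch of correct per-column array binding (the table and connection details are hypothetical, not taken from the reporter's environment):

{code:java}
import java.sql.Array;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.UUID;

public class PerColumnArrayBinding {
    public static void main(String[] args) throws Exception {
        // Hypothetical sink table:
        //   CREATE TABLE t (modes text[], routes text[], operator_ids uuid[]);
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/sink", "postgres", "postgres")) {
            // Each column's Array is created from ITS OWN element type name.
            // Reusing the last resolved element type ("uuid") for the text[]
            // columns is what produces "invalid input syntax for type uuid".
            Array modes = conn.createArrayOf("text", new String[] {"a", "b"});
            Array routes = conn.createArrayOf("text", new String[] {"r1", "r2"});
            Array operatorIds = conn.createArrayOf("uuid",
                    new UUID[] {UUID.randomUUID(), UUID.randomUUID()});

            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO t (modes, routes, operator_ids) VALUES (?, ?, ?)")) {
                ps.setArray(1, modes);
                ps.setArray(2, routes);
                ps.setArray(3, operatorIds);
                ps.executeUpdate();
            }
        }
    }
}
{code}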
Do you see the same behaviour using the latest released Debezium version?
(Ideally, also verify with latest Alpha/Beta/CR version)
Yes.
Do you have the connector logs, ideally from start till finish?
(You might be asked later to provide DEBUG/TRACE level log)
Yes, but they are about 30 MB; the relevant excerpt follows:
2024-06-10 07:02:07,798 ERROR || Batch entry 0 INSERT INTO "public"."product_product_revision" ("id","product_id","product_status","version","author_id","status","status_reason","key","name","description","location","effective_date","expiration_date","price","currency","activation_type","modes","max_rides","duration","routes","zones","validity_start_date","validity_period","validity_end_date","device_binding","ownership","type","transport_binding","payout_config","created_at","deleted_at","updated_at","check_in_mode","check_out_mode","revised_by","revised_at","mode_config","operational_hours_start","operational_hours_end","termination_event_sent","operator_ids") VALUES (cast(('8be0ede3-96ed-48c9-bc1d-81f341a7ab3b') as uuid),cast(('8be0ede3-96ed-48c9-bc1d-81f341a7ab3b') as uuid),('statx'),('1'::int8),cast(('8be0ede3-96ed-48c9-bc1d-81f341a7ab3a') as uuid),('statud'),('xxx'),('asdasd'),('sdfsdf'),('dfgdfg'),('fghfgh'),('2024-06-06'::date),('2024-06-06'::date),('11.10'::numeric),('EUR'),('sdfsdf'),('{"a","b"}'),('1'::int8),cast((NULL) as interval),(NULL),(NULL),(NULL),cast((NULL) as interval),(NULL),('TRUE'),('fgdfgfdg'),('dfgdfg'),('TRUE'),cast((NULL) as json),('2024-06-06 11:44:10.660442+00'::timestamp),(NULL),(NULL),('NOT_REQUIRED'),('sdfsdf'),cast((NULL) as uuid),(NULL),cast((NULL) as json),(NULL),(NULL),('FALSE'),('{"8be0ede3-96ed-48c9-bc1d-81f341a7ab3e","8be0ede3-96ed-48c9-bc1d-81f341a7ab3a"}')) ON CONFLICT ("id") DO UPDATE SET "product_id"=EXCLUDED."product_id","product_status"=EXCLUDED."product_status","version"=EXCLUDED."version","author_id"=EXCLUDED."author_id","status"=EXCLUDED."status","status_reason"=EXCLUDED."status_reason","key"=EXCLUDED."key","name"=EXCLUDED."name","description"=EXCLUDED."description","location"=EXCLUDED."location","effective_date"=EXCLUDED."effective_date","expiration_date"=EXCLUDED."expiration_date","price"=EXCLUDED."price","currency"=EXCLUDED."currency","activation_type"=EXCLUDED."activation_type","modes"=EXCLUDED."modes","max_rides"=EXCLUDED."max_rides","duration"=EXCLUDED."duration","routes"=EXCLUDED."routes","zones"=EXCLUDED."zones","validity_start_date"=EXCLUDED."validity_start_date","validity_period"=EXCLUDED."validity_period","validity_end_date"=EXCLUDED."validity_end_date","device_binding"=EXCLUDED."device_binding","ownership"=EXCLUDED."ownership","type"=EXCLUDED."type","transport_binding"=EXCLUDED."transport_binding","payout_config"=EXCLUDED."payout_config","created_at"=EXCLUDED."created_at","deleted_at"=EXCLUDED."deleted_at","updated_at"=EXCLUDED."updated_at","check_in_mode"=EXCLUDED."check_in_mode","check_out_mode"=EXCLUDED."check_out_mode","revised_by"=EXCLUDED."revised_by","revised_at"=EXCLUDED."revised_at","mode_config"=EXCLUDED."mode_config","operational_hours_start"=EXCLUDED."operational_hours_start","operational_hours_end"=EXCLUDED."operational_hours_end","termination_event_sent"=EXCLUDED."termination_event_sent","operator_ids"=EXCLUDED."operator_ids" was aborted: ERROR: invalid input syntax for type uuid: "a" Where: unnamed portal parameter $17 = '...' Call getNextException to see other errors in the batch. [org.hibernate.engine.jdbc.spi.SqlExceptionHelper]
2024-06-10 07:02:07,798 ERROR || ERROR: invalid input syntax for type uuid: "a" Where: unnamed portal parameter $17 = '...' [org.hibernate.engine.jdbc.spi.SqlExceptionHelper]
2024-06-10 07:02:07,799 DEBUG || rolling back [org.hibernate.engine.transaction.internal.TransactionImpl]
2024-06-10 07:02:07,799 TRACE || Preparing to rollback transaction via JDBC Connection.rollback() [org.hibernate.resource.jdbc.internal.AbstractLogicalConnectionImplementor]
2024-06-10 07:02:07,800 TRACE || Transaction rolled-back via JDBC Connection.rollback() [org.hibernate.resource.jdbc.internal.AbstractLogicalConnectionImplementor]
2024-06-10 07:02:07,800 TRACE || LogicalConnection#afterTransaction [org.hibernate.resource.jdbc.internal.AbstractLogicalConnectionImplementor]
2024-06-10 07:02:07,800 TRACE || Releasing JDBC resources [org.hibernate.resource.jdbc.internal.ResourceRegistryStandardImpl]
2024-06-10 07:02:07,800 DEBUG || Initiating JDBC connection release from afterTransaction [org.hibernate.resource.jdbc.internal.LogicalConnectionManagedImpl]
2024-06-10 07:02:07,800 TRACE || Releasing JDBC resources [org.hibernate.resource.jdbc.internal.ResourceRegistryStandardImpl]
2024-06-10 07:02:07,800 TRACE || com.mchange.v2.async.ThreadPoolAsynchronousRunner@135758b: Adding task to queue -- com.mchange.v2.resourcepool.BasicResourcePool$1RefurbishCheckinResourceTask@5b60bb72 [com.mchange.v2.async.ThreadPoolAsynchronousRunner]
2024-06-10 07:02:07,800 TRACE || trace com.mchange.v2.resourcepool.BasicResourcePool@6bae2e47 [managed: 5, unused: 4, excluded: 0] (e.g. com.mchange.v2.c3p0.impl.NewPooledConnection@30722251) [com.mchange.v2.resourcepool.BasicResourcePool]
2024-06-10 07:02:07,800 TRACE || ResourceLocalTransactionCoordinatorImpl#afterCompletionCallback(false) [org.hibernate.resource.transaction.backend.jdbc.internal.JdbcResourceLocalTransactionCoordinatorImpl]
2024-06-10 07:02:07,800 TRACE || SynchronizationRegistryStandardImpl.notifySynchronizationsAfterTransactionCompletion(5) [org.hibernate.resource.transaction.internal.SynchronizationRegistryStandardImpl]
2024-06-10 07:02:07,800 TRACE || LogicalConnection#afterTransaction [org.hibernate.resource.jdbc.internal.AbstractLogicalConnectionImplementor]
2024-06-10 07:02:07,800 TRACE || Releasing JDBC resources [org.hibernate.resource.jdbc.internal.ResourceRegistryStandardImpl]
2024-06-10 07:02:07,800 DEBUG || Initiating JDBC connection release from afterTransaction [org.hibernate.resource.jdbc.internal.LogicalConnectionManagedImpl]
2024-06-10 07:02:07,800 ERROR || Failed to process record: Failed to process a sink record [io.debezium.connector.jdbc.JdbcSinkConnectorTask]
org.apache.kafka.connect.errors.ConnectException: Failed to process a sink record
    at io.debezium.connector.jdbc.JdbcChangeEventSink.flushBuffer(JdbcChangeEventSink.java:229)
    at io.debezium.connector.jdbc.JdbcChangeEventSink.lambda$flushBuffers$2(JdbcChangeEventSink.java:207)
    at java.base/java.util.HashMap.forEach(HashMap.java:1337)
    at io.debezium.connector.jdbc.JdbcChangeEventSink.flushBuffers(JdbcChangeEventSink.java:207)
    at io.debezium.connector.jdbc.JdbcChangeEventSink.execute(JdbcChangeEventSink.java:159)
    at io.debezium.connector.jdbc.JdbcSinkConnectorTask.put(JdbcSinkConnectorTask.java:103)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:601)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:350)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:250)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:219)
    at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:204)
    at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:259)
    at org.apache.kafka.connect.runtime.isolation.Plugins.lambda$withClassLoader$1(Plugins.java:237)
    at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: org.hibernate.exception.DataException: error executing work [Batch entry 0 INSERT INTO "public"."product_product_revision" ("id","product_id","product_status","version","author_id","status","status_reason","key","name","description","location","effective_date","expiration_date","price","currency","activation_type","modes","max_rides","duration","routes","zones","validity_start_date","validity_period","validity_end_date","device_binding","ownership","type","transport_binding","payout_config","created_at","deleted_at","updated_at","check_in_mode","check_out_mode","revised_by","revised_at","mode_config","operational_hours_start","operational_hours_end","termination_event_sent","operator_ids") VALUES (cast(('8be0ede3-96ed-48c9-bc1d-81f341a7ab3b') as uuid),cast(('8be0ede3-96ed-48c9-bc1d-81f341a7ab3b') as uuid),('statx'),('1'::int8),cast(('8be0ede3-96ed-48c9-bc1d-81f341a7ab3a') as uuid),('statud'),('xxx'),('asdasd'),('sdfsdf'),('dfgdfg'),('fghfgh'),('2024-06-06'::date),('2024-06-06'::date),('11.10'::numeric),('EUR'),('sdfsdf'),('{"a","b"}'),('1'::int8),cast((NULL) as interval),(NULL),(NULL),(NULL),cast((NULL) as interval),(NULL),('TRUE'),('fgdfgfdg'),('dfgdfg'),('TRUE'),cast((NULL) as json),('2024-06-06 11:44:10.660442+00'::timestamp),(NULL),(NULL),('NOT_REQUIRED'),('sdfsdf'),cast((NULL) as uuid),(NULL),cast((NULL) as json),(NULL),(NULL),('FALSE'),('{"8be0ede3-96ed-48c9-bc1d-81f341a7ab3e","8be0ede3-96ed-48c9-bc1d-81f341a7ab3a"}')) ON CONFLICT ("id") DO UPDATE SET "product_id"=EXCLUDED."product_id","product_status"=EXCLUDED."product_status","version"=EXCLUDED."version","author_id"=EXCLUDED."author_id","status"=EXCLUDED."status","status_reason"=EXCLUDED."status_reason","key"=EXCLUDED."key","name"=EXCLUDED."name","description"=EXCLUDED."description","location"=EXCLUDED."location","effective_date"=EXCLUDED."effective_date","expiration_date"=EXCLUDED."expiration_date","price"=EXCLUDED."price","currency"=EXCLUDED."currency","activation_type"=EXCLUDED."activation_type","modes"=EXCLUDED."modes","max_rides"=EXCLUDED."max_rides","duration"=EXCLUDED."duration","routes"=EXCLUDED."routes","zones"=EXCLUDED."zones","validity_start_date"=EXCLUDED."validity_start_date","validity_period"=EXCLUDED."validity_period","validity_end_date"=EXCLUDED."validity_end_date","device_binding"=EXCLUDED."device_binding","ownership"=EXCLUDED."ownership","type"=EXCLUDED."type","transport_binding"=EXCLUDED."transport_binding","payout_config"=EXCLUDED."payout_config","created_at"=EXCLUDED."created_at","deleted_at"=EXCLUDED."deleted_at","updated_at"=EXCLUDED."updated_at","check_in_mode"=EXCLUDED."check_in_mode","check_out_mode"=EXCLUDED."check_out_mode","revised_by"=EXCLUDED."revised_by","revised_at"=EXCLUDED."revised_at","mode_config"=EXCLUDED."mode_config","operational_hours_start"=EXCLUDED."operational_hours_start","operational_hours_end"=EXCLUDED."operational_hours_end","termination_event_sent"=EXCLUDED."termination_event_sent","operator_ids"=EXCLUDED."operator_ids" was aborted: ERROR: invalid input syntax for type uuid: "a" Where: unnamed portal parameter $17 = '...' Call getNextException to see other errors in the batch.] [n/a]
    at org.hibernate.exception.internal.SQLStateConversionDelegate.convert(SQLStateConversionDelegate.java:101)
    at org.hibernate.exception.internal.StandardSQLExceptionConverter.convert(StandardSQLExceptionConverter.java:56)
    at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:108)
    at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:94)
    at org.hibernate.engine.jdbc.internal.JdbcCoordinatorImpl.coordinateWork(JdbcCoordinatorImpl.java:308)
    at org.hibernate.internal.AbstractSharedSessionContract.doWork(AbstractSharedSessionContract.java:977)
    at org.hibernate.internal.AbstractSharedSessionContract.doWork(AbstractSharedSessionContract.java:965)
    at io.debezium.connector.jdbc.RecordWriter.write(RecordWriter.java:51)
    at io.debezium.connector.jdbc.JdbcChangeEventSink.flushBuffer(JdbcChangeEventSink.java:222)
    ... 17 more
Caused by: java.sql.BatchUpdateException: Batch entry 0 INSERT INTO "public"."product_product_revision" ("id","product_id","product_status","version","author_id","status","status_reason","key","name","description","location","effective_date","expiration_date","price","currency","activation_type","modes","max_rides","duration","routes","zones","validity_start_date","validity_period","validity_end_date","device_binding","ownership","type","transport_binding","payout_config","created_at","deleted_at","updated_at","check_in_mode","check_out_mode","revised_by","revised_at","mode_config","operational_hours_start","operational_hours_end","termination_event_sent","operator_ids") VALUES (cast(('8be0ede3-96ed-48c9-bc1d-81f341a7ab3b') as uuid),cast(('8be0ede3-96ed-48c9-bc1d-81f341a7ab3b') as uuid),('statx'),('1'::int8),cast(('8be0ede3-96ed-48c9-bc1d-81f341a7ab3a') as uuid),('statud'),('xxx'),('asdasd'),('sdfsdf'),('dfgdfg'),('fghfgh'),('2024-06-06'::date),('2024-06-06'::date),('11.10'::numeric),('EUR'),('sdfsdf'),('{"a","b"}'),('1'::int8),cast((NULL) as interval),(NULL),(NULL),(NULL),cast((NULL) as interval),(NULL),('TRUE'),('fgdfgfdg'),('dfgdfg'),('TRUE'),cast((NULL) as json),('2024-06-06 11:44:10.660442+00'::timestamp),(NULL),(NULL),('NOT_REQUIRED'),('sdfsdf'),cast((NULL) as uuid),(NULL),cast((NULL) as json),(NULL),(NULL),('FALSE'),('{"8be0ede3-96ed-48c9-bc1d-81f341a7ab3e","8be0ede3-96ed-48c9-bc1d-81f341a7ab3a"}')) ON CONFLICT ("id") DO UPDATE SET "product_id"=EXCLUDED."product_id","product_status"=EXCLUDED."product_status","version"=EXCLUDED."version","author_id"=EXCLUDED."author_id","status"=EXCLUDED."status","status_reason"=EXCLUDED."status_reason","key"=EXCLUDED."key","name"=EXCLUDED."name","description"=EXCLUDED."description","location"=EXCLUDED."location","effective_date"=EXCLUDED."effective_date","expiration_date"=EXCLUDED."expiration_date","price"=EXCLUDED."price","currency"=EXCLUDED."currency","activation_type"=EXCLUDED."activation_type","modes"=EXCLUDED."modes","max_rides"=EXCLUDED."max_rides","duration"=EXCLUDED."duration","routes"=EXCLUDED."routes","zones"=EXCLUDED."zones","validity_start_date"=EXCLUDED."validity_start_date","validity_period"=EXCLUDED."validity_period","validity_end_date"=EXCLUDED."validity_end_date","device_binding"=EXCLUDED."device_binding","ownership"=EXCLUDED."ownership","type"=EXCLUDED."type","transport_binding"=EXCLUDED."transport_binding","payout_config"=EXCLUDED."payout_config","created_at"=EXCLUDED."created_at","deleted_at"=EXCLUDED."deleted_at","updated_at"=EXCLUDED."updated_at","check_in_mode"=EXCLUDED."check_in_mode","check_out_mode"=EXCLUDED."check_out_mode","revised_by"=EXCLUDED."revised_by","revised_at"=EXCLUDED."revised_at","mode_config"=EXCLUDED."mode_config","operational_hours_start"=EXCLUDED."operational_hours_start","operational_hours_end"=EXCLUDED."operational_hours_end","termination_event_sent"=EXCLUDED."termination_event_sent","operator_ids"=EXCLUDED."operator_ids" was aborted: ERROR: invalid input syntax for type uuid: "a" Where: unnamed portal parameter $17 = '...' Call getNextException to see other errors in the batch.
    at org.postgresql.jdbc.BatchResultHandler.handleError(BatchResultHandler.java:165)
    at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2402)
    at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:574)
    at org.postgresql.jdbc.PgStatement.internalExecuteBatch(PgStatement.java:896)
    at org.postgresql.jdbc.PgStatement.executeBatch(PgStatement.java:919)
    at org.postgresql.jdbc.PgPreparedStatement.executeBatch(PgPreparedStatement.java:1685)
    at com.mchange.v2.c3p0.impl.NewProxyPreparedStatement.executeBatch(NewProxyPreparedStatement.java:2544)
    at io.debezium.connector.jdbc.RecordWriter.lambda$processBatch$0(RecordWriter.java:91)
    at org.hibernate.jdbc.WorkExecutor.executeWork(WorkExecutor.java:37)
    at org.hibernate.internal.AbstractSharedSessionContract.lambda$doWork$4(AbstractSharedSessionContract.java:966)
    at org.hibernate.engine.jdbc.internal.JdbcCoordinatorImpl.coordinateWork(JdbcCoordinatorImpl.java:303)
    ... 21 more
Caused by: org.postgresql.util.PSQLException: ERROR: invalid input syntax for type uuid: "a" Where: unnamed portal parameter $17 = '...'
    at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2713)
    at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2401)
    ... 30 more
How to reproduce the issue using our tutorial deployment?
<Your answer>
Feature request or enhancement
For feature requests or enhancements, provide this information, please:
Which use case/requirement will be addressed by the proposed feature?
Support for multiple ARRAY-type columns in a single table, each with a different element type, e.g. text[] and uuid[].
Implementation ideas (optional)
I will raise a PR for this.
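A minimal sketch of the direction such a fix could take (hypothetical names, not the actual Debezium classes): resolve the array binding from each column's own element type, so no column can inherit a descriptor left behind by the previous one.

{code:java}
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical illustration of the implementation idea; these are not the
// actual Debezium classes.
public class PerColumnArrayTypeResolution {

    // One fresh binding per column; nothing here survives to the next column.
    static String resolveArrayType(String elementType) {
        return elementType + "[]";
    }

    public static void main(String[] args) {
        Map<String, String> columns = new LinkedHashMap<>();
        columns.put("modes", "text");
        columns.put("routes", "text");
        columns.put("operator_ids", "uuid");

        columns.forEach((name, elementType) ->
                System.out.println(name + " -> " + resolveArrayType(elementType)));
        // Prints the expected mapping from the report:
        //   modes -> text[]
        //   routes -> text[]
        //   operator_ids -> uuid[]
    }
}
{code}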
is caused by: DBZ-7752 Support for ARRAY data types for postgres (Closed)
links to: RHEA-2024:139598 Red Hat build of Debezium 2.5.4 release