- Bug
- Resolution: Done
- Major
- 1.5.0.Beta2
- None
- False
- False
- Undefined
I configured CDC for a table using the Oracle LogMiner connector.
The table has a TIMESTAMP(6) column.
The resulting schema looks like this:
{"type":"int64","optional":true,"name":"io.debezium.time.MicroTimestamp","version":1,"field":"column_name"}
And the payload like this:
"column_name":1606521600000000
My goal was to insert the value into another database (DB2) via the JDBC sink connector, but I have problems inserting this timestamp into DB2. I tried to use org.apache.kafka.connect.transforms.TimestampConverter to convert the payload for column_name to a timestamp, but this leads to an error (Year exceeds the maximum "9999").
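A minimal sketch of such a sink-side transform configuration (the transform alias "tsconv" is a placeholder); note that TimestampConverter interprets an int64 field as epoch milliseconds, which is why a microsecond value overflows:

```properties
transforms=tsconv
transforms.tsconv.type=org.apache.kafka.connect.transforms.TimestampConverter$Value
transforms.tsconv.field=column_name
transforms.tsconv.target.type=Timestamp
```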
So I tried to change time.precision.mode to adaptive_time_microseconds, without success. After that I tried time.precision.mode=connect, with the same result.
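For context, this setting goes on the source connector configuration, e.g. (excerpt, assuming the Debezium Oracle connector):

```properties
# With time.precision.mode=connect, TIMESTAMP(6) should be emitted as
# org.apache.kafka.connect.data.Timestamp (millisecond precision) --
# but the setting had no effect here, for the reason found below.
time.precision.mode=connect
```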
Digging around in debezium-connector-oracle, I found the cause:
the TemporalPrecisionMode is hardcoded in the constructor of the OracleValueConverters class.
This all leads to two questions:
Is there a reason for this fixed TemporalPrecisionMode?
Is there any other way to convert the value into a valid timestamp for DB2 (with existing transformations)?