DBZ-6101: GCP Spanner connector fails to start when there are multiple indexes on a single column


      Priority: Critical

      Bug report

      What Debezium connector do you use and what version?

      debezium-connector-spanner 
      v2.2.0.Alpha1

      What is the connector configuration?

      {
        "name": "livestream-connector",
        "config": {
          "connector.class": "io.debezium.connector.spanner.SpannerConnector",
          "tasks.max": "1",
          "gcp.spanner.change.stream": "live-table",
          "gcp.spanner.project.id": "project-stag",
          "gcp.spanner.instance.id": "development-instance",
          "gcp.spanner.database.id": "live-db",
          "gcp.spanner.credentials.path": "/app/services-sa.json",
          "key.converter.schemas.enable": false,
          "value.converter.schemas.enable": false
        }
      }

      What is the captured database version and mode of deployment?

      GCP Managed Spanner Database

      What behaviour do you expect?

      The Spanner connector should be able to start even when there are multiple indexes containing the same column.

      What behaviour do you see?

      The connector fails to start; it throws the IllegalStateException shown in the logs below.

      Do you see the same behaviour using the latest released Debezium version?

      Yes

      Do you have the connector logs, ideally from start till finish?

      Caused by: java.lang.IllegalStateException: Duplicate key host_id (attempted merging values io.debezium.connector.spanner.db.model.schema.Column@4994ffd7 and io.debezium.connector.spanner.db.model.schema.Column@3426cd85)
      at java.base/java.util.stream.Collectors.duplicateKeyException(Collectors.java:133)
      at java.base/java.util.stream.Collectors.lambda$uniqKeysMapAccumulator$1(Collectors.java:180)
      at java.base/java.util.stream.ReduceOps$3ReducingSink.accept(ReduceOps.java:169)
      at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1655)
      at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484)
      at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474)
      at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:913)
      at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
      at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:578)
      at io.debezium.connector.spanner.db.model.schema.TableSchema.<init>(TableSchema.java:23)
      at io.debezium.connector.spanner.db.model.schema.SpannerSchema$SpannerSchemaBuilder.lambda$build$0(SpannerSchema.java:62)
      at java.base/java.util.HashMap.forEach(HashMap.java:1337)
      at io.debezium.connector.spanner.db.model.schema.SpannerSchema$SpannerSchemaBuilder.build(SpannerSchema.java:60)
      at io.debezium.connector.spanner.db.dao.SchemaDao.getSchema(SchemaDao.java:51)
      at io.debezium.connector.spanner.db.metadata.SchemaRegistry.forceUpdateSchema(SchemaRegistry.java:117)
      at io.debezium.connector.spanner.db.metadata.SchemaRegistry.init(SchemaRegistry.java:49)
      at io.debezium.connector.spanner.task.SynchronizationTaskContext.init(SynchronizationTaskContext.java:191)
      ... 15 more
      

      How to reproduce the issue using our tutorial deployment?

      Start the Debezium Spanner connector against a database that has multiple indexes on the same table column (see the sketch below).
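
      A minimal reproduction sketch using the Google Cloud Spanner Java client, under stated assumptions: the table, column (host_id, taken from the error log), and index names are hypothetical, and the project/instance/database IDs are the ones from the configuration above.

      import com.google.cloud.spanner.DatabaseAdminClient;
      import com.google.cloud.spanner.Spanner;
      import com.google.cloud.spanner.SpannerOptions;

      import java.util.Arrays;

      public class CreateOverlappingIndexes {
          public static void main(String[] args) throws Exception {
              Spanner spanner = SpannerOptions.newBuilder()
                      .setProjectId("project-stag") // project from the connector config above
                      .build()
                      .getService();
              DatabaseAdminClient admin = spanner.getDatabaseAdminClient();

              // Create two secondary indexes that both include the host_id column
              // (hypothetical table and index names). Having the same column in
              // more than one index is the condition that triggers the
              // duplicate-key failure when the connector loads the schema.
              admin.updateDatabaseDdl(
                      "development-instance",
                      "live-db",
                      Arrays.asList(
                              "CREATE INDEX live_table_by_host ON live_table(host_id)",
                              "CREATE INDEX live_table_by_host_created ON live_table(host_id, created_at)"),
                      null)
                      .get();

              spanner.close();
          }
      }

      Once the indexes exist, registering the connector with the configuration above should reproduce the exception.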

      Implementation ideas (optional)

      Before calling Collectors.toMap, filter out duplicate columns, or pass a merge function so that duplicate keys are collapsed instead of throwing (see the sketch below).
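
      A minimal, self-contained sketch of the merge-function approach, assuming the key is the column name: the Column record here is only a stand-in for io.debezium.connector.spanner.db.model.schema.Column, not the actual Debezium class, and its accessor name may differ from the real getter.

      import java.util.List;
      import java.util.Map;
      import java.util.function.Function;
      import java.util.stream.Collectors;

      public class DeduplicateColumns {

          // Stand-in for the connector's Column class, just to make the sketch runnable.
          record Column(String name, String type) {
          }

          public static void main(String[] args) {
              // The same column reported twice, as happens when two indexes include it.
              List<Column> columns = List.of(
                      new Column("host_id", "STRING(36)"),
                      new Column("host_id", "STRING(36)"),
                      new Column("created_at", "TIMESTAMP"));

              // Supplying a merge function to Collectors.toMap keeps the first
              // occurrence instead of throwing "Duplicate key host_id".
              Map<String, Column> columnsByName = columns.stream()
                      .collect(Collectors.toMap(
                              Column::name,
                              Function.identity(),
                              (existing, duplicate) -> existing));

              System.out.println(columnsByName.keySet()); // [host_id, created_at] (order not guaranteed)
          }
      }

      The same effect could be achieved by de-duplicating the column list before collecting; either way, TableSchema's constructor would no longer fail when several indexes report the same column.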

            Assignee: Unassigned
            Reporter: Shantanu Sharma (shantanu-sharechat) (Inactive)