  Infinispan / ISPN-6543

Spark connector fails with CCE when compatibility mode is enabled


Details

    Description

      The Spark connector fails when iterating over a remote cache that has compatibility mode enabled. With compatibility mode on, the server keeps entries in their deserialized form (here the keys are java.lang.Integer), while the Hot Rod cache iteration expects raw byte[] keys, so the iteration request fails on the server with:

      org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=125 returned server error (status=0x85): java.lang.ClassCastException: java.lang.Integer cannot be cast to [B
              at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343)
              at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132)
              at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118)
              at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56)
              at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:48)
              at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:100)
              at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:84)
              at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13)
              at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
              at org.apache.spark.util.random.SamplingUtils$.reservoirSampleAndCount(SamplingUtils.scala:41)
              at org.apache.spark.RangePartitioner$$anonfun$9.apply(Partitioner.scala:261)
              at org.apache.spark.RangePartitioner$$anonfun$9.apply(Partitioner.scala:259)
              at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndex$1$$anonfun$apply$22.apply(RDD.scala:745)
              at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndex$1$$anonfun$apply$22.apply(RDD.scala:745)
              at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
              at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
              at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
              at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
              at org.apache.spark.scheduler.Task.run(Task.scala:89)
              at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
      
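      A minimal Spark job along the following lines reproduces the failure. This is only a sketch: the InfinispanRDD constructor shape, the infinispan.rdd.cacheName property, the server address and the cache name "compat-cache" are assumptions for the connector version in use; the only requirement is that the target cache has compatibility mode enabled and holds Integer keys.

      import java.util.Properties

      import org.apache.spark.{SparkConf, SparkContext}
      import org.infinispan.spark.rdd.InfinispanRDD

      object CompatModeRepro {
        def main(args: Array[String]): Unit = {
          val sc = new SparkContext(new SparkConf().setAppName("ispn-6543-repro").setMaster("local[2]"))

          // Hot Rod client properties for the connector; the target cache
          // ("compat-cache", a hypothetical name) has compatibility mode enabled server-side.
          val props = new Properties()
          props.setProperty("infinispan.client.hotrod.server_list", "127.0.0.1:11222")
          props.setProperty("infinispan.rdd.cacheName", "compat-cache")

          // The cache holds Integer keys and String values.
          val rdd = new InfinispanRDD[Integer, String](sc, props)

          // sortByKey follows the RangePartitioner sampling path from the stack trace;
          // any action that iterates the remote cache (count, collect, ...) fails the same way.
          implicit val keyOrdering: Ordering[Integer] = Ordering.by(_.intValue)
          rdd.sortByKey().count()

          sc.stop()
        }
      }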


            People

              Assignee: Gustavo Fernandes (gfernand@redhat.com)
              Reporter: Vojtech Juranek (vjuranek@redhat.com)
              Votes: 0
              Watchers: 2
