/qa/tools/opt/amd64/jdk1.8.0_last/bin/java -Xms24g -Xmx24g -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Duse_intern=false -Dprint.results=false -DnumPartitions=32 -Dnum_executions=10 -Dram_percentage=3.6 -Dlog4j.configuration=file:/home/hudson/hudson_workspace/workspace/mcimbora-spark-wf-perf/etc/log4j/log4j-info.xml -Dserver.config=/home/hudson/hudson_workspace/workspace/mcimbora-spark-wf-perf/etc/configs/distro/ISPN8/server.xml -DhotrodServer=172.18.1.18 -Dispn.server.list=172.18.1.18:11222 -Dspark.master.host=172.18.1.2 -Dspark.home=/home/hudson/users-tmp/rmacor/spark-1.5.1-bin-hadoop2.6 -Dispn.home=/home/mcimbora/temp/server/infinispan-server-8.2.1-SNAPSHOT -Dsegments.count=512 -Djboss.default.multicast.address=234.99.54.15 -Djgroups.udp.bind_addr=172.18.1.4 -Djava.net.preferIPv4Stack=true -Dlog4j.file.prefix=edg-perf02.mw.lab.eng.bos.redhat.com-7257 -classpath /qa/home/mcimbora/radargun/RadarGun/lib/HdrHistogram-2.1.0.jar:/qa/home/mcimbora/radargun/RadarGun/lib/hdrhistogram-2.2.0-SNAPSHOT.jar:/qa/home/mcimbora/radargun/RadarGun/lib/jboss-transaction-api-1.0.1.GA.jar:/qa/home/mcimbora/radargun/RadarGun/lib/log4j-1.2.16.jar:/qa/home/mcimbora/radargun/RadarGun/lib/radargun-core-2.2.0-SNAPSHOT.jar:/qa/home/mcimbora/radargun/RadarGun/lib/radargun-core-2.2.0-SNAPSHOT-tests.jar:/qa/home/mcimbora/radargun/RadarGun/conf:/qa/tools/opt/amd64/jdk1.8.0_last/lib/tools.jar org.radargun.Slave --master 172.18.1.15 --slaveIndex 1
--------------------------------------------------------------------------------
12:13:07,703 INFO [org.radargun.RemoteMasterConnection] (main) Attempting to connect to master 172.18.1.15:2103
12:13:09,723 INFO [org.radargun.RemoteMasterConnection] (main) Successfully established connection with master at: 172.18.1.15:2103
12:13:14,216 INFO [org.radargun.Slave] (main) Received slave index 1
12:13:14,216 INFO [org.radargun.Slave] (main) Received slave count 26
12:13:15,087 INFO [org.radargun.Slave] (sc-main) Service is SparkDriverService {appName=testApp, host=172.18.1.2, port=7077, properties=[ KeyValueProperty{key='spark.executor.memory', value='24g'}, KeyValueProperty{key='spark.eventLog.enabled', value='true'}, KeyValueProperty{key='spark.serializer', value='org.apache.spark.serializer.KryoSerializer'} ], sourceClass=org.radargun.service.demo.ispn.WordCountSource, sourceProperties=[ KeyValueProperty{key='setHotrodServer', value='172.18.1.18'}, KeyValueProperty{key='setHotrodPort', value='11222'}, KeyValueProperty{key='setNumPartitions', value='32'} ] }
12:13:16,570 INFO [org.radargun.Slave] (sc-main) Starting stage ScenarioInit
9.817: [GC (System.gc()) [PSYoungGen: 629145K->7535K(7340032K)] 629145K->7615K(24117248K), 0.0297380 secs] [Times: user=0.08 sys=0.02, real=0.02 secs]
9.847: [Full GC (System.gc()) [PSYoungGen: 7535K->0K(7340032K)] [ParOldGen: 80K->7194K(16777216K)] 7615K->7194K(24117248K), [Metaspace: 14330K->14330K(1062912K)], 0.0984616 secs] [Times: user=0.23 sys=0.04, real=0.09 secs]
12:13:16,701 INFO [org.radargun.Slave] (sc-main) Finished stage ScenarioInit
12:13:16,718 INFO [org.radargun.RemoteMasterConnection] (sc-main) Response successfully sent to the master
12:13:17,254 INFO [org.radargun.Slave] (sc-main) Starting stage Define
12:13:17,255 INFO [org.radargun.Slave] (sc-main) Finished stage Define
12:13:17,256 INFO [org.radargun.RemoteMasterConnection] (sc-main) Response successfully sent to the master
12:13:17,298 INFO [org.radargun.Slave] (sc-main) Starting stage RepeatBegin
12:13:17,299 INFO [org.radargun.Slave] (sc-main) Finished stage RepeatBegin
12:13:17,300 INFO [org.radargun.RemoteMasterConnection] (sc-main) Response successfully sent to the master
12:13:17,339 INFO [org.radargun.Slave] (sc-main) Stage should not be executed
12:13:17,340 INFO [org.radargun.RemoteMasterConnection] (sc-main) Response successfully sent to the master
12:13:19,604 INFO [org.radargun.Slave] (sc-main) Stage should not be executed
12:13:19,605 INFO [org.radargun.RemoteMasterConnection] (sc-main) Response successfully sent to the master
12:13:36,462 INFO [org.radargun.Slave] (sc-main) Starting stage ServiceStart
12:13:36,463 INFO [org.radargun.stages.lifecycle.ServiceStartStage] (sc-main) Startup staggering, this is the slave with index 0, not sleeping
12:13:36,463 INFO [org.radargun.stages.lifecycle.ServiceStartStage] (sc-main) Ack master's StartCluster stage. Local address is: /172.18.1.3. This slave's index is: 1
12:13:37,832 WARN [org.apache.hadoop.util.NativeCodeLoader] (sc-main) Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
31.715: [GC (Metadata GC Threshold) [PSYoungGen: 503316K->10186K(7340032K)] 510511K->17389K(24117248K), 0.0287162 secs] [Times: user=0.09 sys=0.01, real=0.03 secs]
31.744: [Full GC (Metadata GC Threshold) [PSYoungGen: 10186K->0K(7340032K)] [ParOldGen: 7202K->12462K(16777216K)] 17389K->12462K(24117248K), [Metaspace: 24339K->24339K(1071104K)], 0.1021998 secs] [Times: user=0.28 sys=0.03, real=0.10 secs]
12:13:39,344 INFO [akka.event.slf4j.Slf4jLogger] (sparkDriver-akka.actor.default-dispatcher-2) Slf4jLogger started
12:13:39,433 INFO [Remoting] (sparkDriver-akka.actor.default-dispatcher-2) Starting remoting
12:13:39,930 INFO [Remoting] (sparkDriver-akka.actor.default-dispatcher-2) Remoting started; listening on addresses :[akka.tcp://sparkDriver@10.16.90.107:42801]
12:13:40,417 INFO [org.spark-project.jetty.server.Server] (sc-main) jetty-8.y.z-SNAPSHOT
12:13:40,448 INFO [org.spark-project.jetty.server.AbstractConnector] (sc-main) Started SocketConnector@0.0.0.0:43466
12:13:40,735 INFO [org.spark-project.jetty.server.Server] (sc-main) jetty-8.y.z-SNAPSHOT
12:13:40,780 INFO [org.spark-project.jetty.server.AbstractConnector] (sc-main) Started SelectChannelConnector@0.0.0.0:4040
12:13:41,040 WARN [org.apache.spark.metrics.MetricsSystem] (sc-main) Using default name DAGScheduler for source because spark.app.id is not set.
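Note on the driver configuration: the SparkDriverService description above lists the settings this slave runs the driver with (appName=testApp, master at 172.18.1.2:7077, 24g executors, Kryo serialization). For orientation only, a driver with the same settings could be wired up roughly as follows with the plain Spark 1.x Java API; this is a sketch, not RadarGun's actual SparkDriverService code, and the class name DriverConfSketch is made up.

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

// Illustrative only: mirrors the properties recorded by the SparkDriverService log line above.
public class DriverConfSketch {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setAppName("testApp")                                   // appName=testApp
                .setMaster("spark://172.18.1.2:7077")                    // host=172.18.1.2, port=7077
                .set("spark.executor.memory", "24g")
                .set("spark.eventLog.enabled", "true")
                .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer");
        JavaSparkContext sc = new JavaSparkContext(conf);
        // ... submit jobs here ...
        sc.stop();
    }
}

All values above come straight from the logged properties; the choice of the Kryo serializer is the benchmark's own setting via spark.serializer.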
35.737: [GC (Metadata GC Threshold) [PSYoungGen: 1761994K->20886K(7340032K)] 1774457K->33357K(24117248K), 0.0390135 secs] [Times: user=0.10 sys=0.01, real=0.04 secs]
35.776: [Full GC (Metadata GC Threshold) [PSYoungGen: 20886K->0K(7340032K)] [ParOldGen: 12470K->24955K(16777216K)] 33357K->24955K(24117248K), [Metaspace: 41071K->41071K(1085440K)], 0.2084211 secs] [Times: user=1.18 sys=0.02, real=0.21 secs]
12:13:43,979 INFO [org.radargun.stages.lifecycle.ServiceStartStage] (sc-main) Successfully started cache service spark/driver on slave 1
12:13:43,980 INFO [org.radargun.Slave] (sc-main) Finished stage ServiceStart
12:13:43,982 INFO [org.radargun.RemoteMasterConnection] (sc-main) Response successfully sent to the master
12:13:43,999 INFO [org.radargun.Slave] (sc-main) Stage should not be executed
12:13:44,000 INFO [org.radargun.RemoteMasterConnection] (sc-main) Response successfully sent to the master
12:14:26,067 INFO [org.radargun.Slave] (sc-main) Stage should not be executed
12:14:26,069 INFO [org.radargun.RemoteMasterConnection] (sc-main) Response successfully sent to the master
12:14:34,993 INFO [org.radargun.Slave] (sc-main) Starting stage Define
12:14:34,993 INFO [org.radargun.Slave] (sc-main) Finished stage Define
12:14:34,994 INFO [org.radargun.RemoteMasterConnection] (sc-main) Response successfully sent to the master
12:14:35,052 INFO [org.radargun.Slave] (sc-main) Stage should not be executed
12:14:35,053 INFO [org.radargun.RemoteMasterConnection] (sc-main) Response successfully sent to the master
12:35:50,324 INFO [org.radargun.Slave] (sc-main) Starting stage MapReduce
12:35:50,336 INFO [org.radargun.stages.mapreduce.MapReduceStage] (sc-main) --------------------
12:35:50,337 INFO [org.radargun.stages.mapreduce.MapReduceStage] (sc-main) org.radargun.service.SparkMapReduce does not support MapReducer.setDistributeReducePhase()
12:35:50,337 INFO [org.radargun.stages.mapreduce.MapReduceStage] (sc-main) org.radargun.service.SparkMapReduce does not support MapReducer.setUseIntermediateSharedCache()
12:35:50,337 INFO [org.radargun.stages.mapreduce.MapReduceStage] (sc-main) org.radargun.service.SparkMapReduce does not support MapReducer.setTimeout()
12:35:50,337 INFO [org.radargun.stages.mapreduce.MapReduceStage] (sc-main) org.radargun.service.SparkMapReduce does not support MapReducer.setCombiner()
[Stage 0:> (0 + 0) / 64] [Stage 0:> (0 + 64) / 64] [Stage 0:> (1 + 63) / 64] [Stage 0:==============> (16 + 48) / 64] [Stage 0:=====================> (24 + 40) / 64] [Stage 0:==========================================> (48 + 16) / 64] [Stage 0:=================================================> (56 + 8) / 64]
12:36:16,464 INFO [org.infinispan.client.hotrod.impl.protocol.Codec21] (sc-main) ISPN004006: /172.18.1.18:11222 sent new topology view (id=14, age=0) containing 8 addresses: [/172.18.1.28:11222, /172.18.1.26:11222, /172.18.1.32:11222, /172.18.1.30:11222, /172.18.1.20:11222, /172.18.1.18:11222, /172.18.1.24:11222, /172.18.1.22:11222]
12:36:16,468 INFO [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (sc-main) ISPN004014: New server added(/172.18.1.28:11222), adding to the pool.
12:36:16,472 INFO [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (sc-main) ISPN004014: New server added(/172.18.1.26:11222), adding to the pool.
12:36:16,474 INFO [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (sc-main) ISPN004014: New server added(/172.18.1.32:11222), adding to the pool.
12:36:16,476 INFO [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (sc-main) ISPN004014: New server added(/172.18.1.30:11222), adding to the pool.
12:36:16,478 INFO [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (sc-main) ISPN004014: New server added(/172.18.1.20:11222), adding to the pool.
12:36:16,480 INFO [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (sc-main) ISPN004014: New server added(/172.18.1.24:11222), adding to the pool.
12:36:16,482 INFO [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (sc-main) ISPN004014: New server added(/172.18.1.22:11222), adding to the pool.
12:36:16,490 INFO [org.infinispan.client.hotrod.RemoteCacheManager] (sc-main) ISPN004021: Infinispan version: 8.1.0.Final
12:36:16,532 INFO [org.infinispan.client.hotrod.impl.protocol.Codec21] (sc-main) ISPN004006: /172.18.1.28:11222 sent new topology view (id=14, age=0) containing 8 addresses: [/172.18.1.28:11222, /172.18.1.26:11222, /172.18.1.32:11222, /172.18.1.30:11222, /172.18.1.20:11222, /172.18.1.18:11222, /172.18.1.24:11222, /172.18.1.22:11222]
[Stage 1:> (0 + 0) / 256] ... [Stage 1:======================================================>(255 + 1) / 256]
[Stage 2:====================> (95 + 64) / 256] [Stage 2:====================================> (173 + 67) / 256] [Stage 2:=================================================> (236 + 20) / 256] [Stage 2:===================================================> (243 + 13) / 256] [Stage 2:======================================================>(253 + 3) / 256]
12:38:37,200 INFO [org.radargun.stages.mapreduce.MapReduceStage] (sc-main) MapReduce task completed in 141.17 seconds
12:38:37,201 INFO [org.radargun.stages.mapreduce.MapReduceStage] (sc-main) Result map contains '19' keys.
12:38:37,202 INFO [org.radargun.stages.mapreduce.MapReduceStage] (sc-main) --------------------
12:38:37,233 INFO [org.infinispan.client.hotrod.impl.protocol.Codec21] (sc-main) ISPN004006: /172.18.1.18:11222 sent new topology view (id=14, age=0) containing 8 addresses: [/172.18.1.28:11222, /172.18.1.26:11222, /172.18.1.32:11222, /172.18.1.30:11222, /172.18.1.20:11222, /172.18.1.18:11222, /172.18.1.24:11222, /172.18.1.22:11222]
12:38:37,234 INFO [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (sc-main) ISPN004014: New server added(/172.18.1.28:11222), adding to the pool.
12:38:37,235 INFO [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (sc-main) ISPN004014: New server added(/172.18.1.26:11222), adding to the pool.
12:38:37,236 INFO [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (sc-main) ISPN004014: New server added(/172.18.1.32:11222), adding to the pool.
12:38:37,237 INFO [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (sc-main) ISPN004014: New server added(/172.18.1.30:11222), adding to the pool.
12:38:37,238 INFO [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (sc-main) ISPN004014: New server added(/172.18.1.20:11222), adding to the pool.
12:38:37,238 INFO [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (sc-main) ISPN004014: New server added(/172.18.1.24:11222), adding to the pool.
12:38:37,239 INFO [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (sc-main) ISPN004014: New server added(/172.18.1.22:11222), adding to the pool.
12:38:37,240 INFO [org.infinispan.client.hotrod.RemoteCacheManager] (sc-main) ISPN004021: Infinispan version: 8.1.0.Final
12:38:37,247 INFO [org.infinispan.client.hotrod.impl.protocol.Codec21] (sc-main) ISPN004006: /172.18.1.28:11222 sent new topology view (id=14, age=0) containing 8 addresses: [/172.18.1.28:11222, /172.18.1.26:11222, /172.18.1.32:11222, /172.18.1.30:11222, /172.18.1.20:11222, /172.18.1.18:11222, /172.18.1.24:11222, /172.18.1.22:11222]
[Stage 3:> (0 + 64) / 256] ... [Stage 3:======================================================>(254 + 2) / 256]
[Stage 4:> (0 + 0) / 256]
12:40:44,471 INFO [org.radargun.stages.mapreduce.MapReduceStage] (sc-main) MapReduce task completed in 127.27 seconds
12:40:44,471 INFO [org.radargun.stages.mapreduce.MapReduceStage] (sc-main) Result map contains '19' keys.
12:40:44,471 INFO [org.radargun.stages.mapreduce.MapReduceStage] (sc-main) --------------------
12:40:44,472 INFO [org.radargun.stages.mapreduce.MapReduceStage] (sc-main) 1: Got the same results for two Map/Reduce runs
12:40:44,498 INFO [org.infinispan.client.hotrod.impl.protocol.Codec21] (sc-main) ISPN004006: /172.18.1.18:11222 sent new topology view (id=14, age=0) containing 8 addresses: [/172.18.1.28:11222, /172.18.1.26:11222, /172.18.1.32:11222, /172.18.1.30:11222, /172.18.1.20:11222, /172.18.1.18:11222, /172.18.1.24:11222, /172.18.1.22:11222]
12:40:44,498 INFO [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (sc-main) ISPN004014: New server added(/172.18.1.28:11222), adding to the pool.
12:40:44,499 INFO [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (sc-main) ISPN004014: New server added(/172.18.1.26:11222), adding to the pool.
12:40:44,500 INFO [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (sc-main) ISPN004014: New server added(/172.18.1.32:11222), adding to the pool.
12:40:44,501 INFO [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (sc-main) ISPN004014: New server added(/172.18.1.30:11222), adding to the pool.
12:40:44,502 INFO [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (sc-main) ISPN004014: New server added(/172.18.1.20:11222), adding to the pool.
12:40:44,503 INFO [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (sc-main) ISPN004014: New server added(/172.18.1.24:11222), adding to the pool.
12:40:44,504 INFO [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (sc-main) ISPN004014: New server added(/172.18.1.22:11222), adding to the pool.
12:40:44,505 INFO [org.infinispan.client.hotrod.RemoteCacheManager] (sc-main) ISPN004021: Infinispan version: 8.1.0.Final
12:40:44,512 INFO [org.infinispan.client.hotrod.impl.protocol.Codec21] (sc-main) ISPN004006: /172.18.1.28:11222 sent new topology view (id=14, age=0) containing 8 addresses: [/172.18.1.28:11222, /172.18.1.26:11222, /172.18.1.32:11222, /172.18.1.30:11222, /172.18.1.20:11222, /172.18.1.18:11222, /172.18.1.24:11222, /172.18.1.22:11222]
[Stage 5:> (0 + 64) / 256] ... [Stage 5:======================================================>(255 + 1) / 256]
12:43:01,893 INFO [org.radargun.stages.mapreduce.MapReduceStage] (sc-main) MapReduce task completed in 137.42 seconds
12:43:01,893 INFO [org.radargun.stages.mapreduce.MapReduceStage] (sc-main) Result map contains '19' keys.
12:43:01,894 INFO [org.radargun.stages.mapreduce.MapReduceStage] (sc-main) --------------------
12:43:01,894 INFO [org.radargun.stages.mapreduce.MapReduceStage] (sc-main) 2: Got the same results for two Map/Reduce runs
12:43:01,921 INFO [org.infinispan.client.hotrod.impl.protocol.Codec21] (sc-main) ISPN004006: /172.18.1.18:11222 sent new topology view (id=14, age=0) containing 8 addresses: [/172.18.1.28:11222, /172.18.1.26:11222, /172.18.1.32:11222, /172.18.1.30:11222, /172.18.1.20:11222, /172.18.1.18:11222, /172.18.1.24:11222, /172.18.1.22:11222]
12:43:01,922 INFO [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (sc-main) ISPN004014: New server added(/172.18.1.28:11222), adding to the pool.
12:43:01,923 INFO [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (sc-main) ISPN004014: New server added(/172.18.1.26:11222), adding to the pool.
12:43:01,924 INFO [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (sc-main) ISPN004014: New server added(/172.18.1.32:11222), adding to the pool.
12:43:01,925 INFO [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (sc-main) ISPN004014: New server added(/172.18.1.30:11222), adding to the pool.
12:43:01,926 INFO [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (sc-main) ISPN004014: New server added(/172.18.1.20:11222), adding to the pool.
12:43:01,927 INFO [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (sc-main) ISPN004014: New server added(/172.18.1.24:11222), adding to the pool.
12:43:01,927 INFO [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (sc-main) ISPN004014: New server added(/172.18.1.22:11222), adding to the pool.
12:43:01,929 INFO [org.infinispan.client.hotrod.RemoteCacheManager] (sc-main) ISPN004021: Infinispan version: 8.1.0.Final
12:43:01,934 INFO [org.infinispan.client.hotrod.impl.protocol.Codec21] (sc-main) ISPN004006: /172.18.1.28:11222 sent new topology view (id=14, age=0) containing 8 addresses: [/172.18.1.28:11222, /172.18.1.26:11222, /172.18.1.32:11222, /172.18.1.30:11222, /172.18.1.20:11222, /172.18.1.18:11222, /172.18.1.24:11222, /172.18.1.22:11222]
[Stage 7:> (0 + 64) / 256] ... [Stage 7:======================================================>(254 + 2) / 256]
12:45:18,974 INFO [org.radargun.stages.mapreduce.MapReduceStage] (sc-main) MapReduce task completed in 137.08 seconds
12:45:18,974 INFO [org.radargun.stages.mapreduce.MapReduceStage] (sc-main) Result map contains '19' keys.
12:45:18,974 INFO [org.radargun.stages.mapreduce.MapReduceStage] (sc-main) --------------------
12:45:18,974 INFO [org.radargun.stages.mapreduce.MapReduceStage] (sc-main) 3: Got the same results for two Map/Reduce runs
12:45:19,002 INFO [org.infinispan.client.hotrod.impl.protocol.Codec21] (sc-main) ISPN004006: /172.18.1.18:11222 sent new topology view (id=14, age=0) containing 8 addresses: [/172.18.1.28:11222, /172.18.1.26:11222, /172.18.1.32:11222, /172.18.1.30:11222, /172.18.1.20:11222, /172.18.1.18:11222, /172.18.1.24:11222, /172.18.1.22:11222]
12:45:19,002 INFO [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (sc-main) ISPN004014: New server added(/172.18.1.28:11222), adding to the pool.
12:45:19,003 INFO [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (sc-main) ISPN004014: New server added(/172.18.1.26:11222), adding to the pool.
12:45:19,004 INFO [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (sc-main) ISPN004014: New server added(/172.18.1.32:11222), adding to the pool.
12:45:19,005 INFO [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (sc-main) ISPN004014: New server added(/172.18.1.30:11222), adding to the pool.
12:45:19,006 INFO [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (sc-main) ISPN004014: New server added(/172.18.1.20:11222), adding to the pool.
12:45:19,006 INFO [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (sc-main) ISPN004014: New server added(/172.18.1.24:11222), adding to the pool.
12:45:19,007 INFO [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (sc-main) ISPN004014: New server added(/172.18.1.22:11222), adding to the pool.
12:45:19,008 INFO [org.infinispan.client.hotrod.RemoteCacheManager] (sc-main) ISPN004021: Infinispan version: 8.1.0.Final
12:45:19,013 INFO [org.infinispan.client.hotrod.impl.protocol.Codec21] (sc-main) ISPN004006: /172.18.1.28:11222 sent new topology view (id=14, age=0) containing 8 addresses: [/172.18.1.28:11222, /172.18.1.26:11222, /172.18.1.32:11222, /172.18.1.30:11222, /172.18.1.20:11222, /172.18.1.18:11222, /172.18.1.24:11222, /172.18.1.22:11222]
[Stage 9:> (0 + 64) / 256] [Stage 9:======================================================>(255 + 1) / 256]
12:47:19,227 INFO [org.radargun.stages.mapreduce.MapReduceStage] (sc-main) MapReduce task completed in 120.25 seconds
12:47:19,227 INFO [org.radargun.stages.mapreduce.MapReduceStage] (sc-main) Result map contains '19' keys.
12:47:19,227 INFO [org.radargun.stages.mapreduce.MapReduceStage] (sc-main) --------------------
12:47:19,228 INFO [org.radargun.stages.mapreduce.MapReduceStage] (sc-main) 4: Got the same results for two Map/Reduce runs
12:47:19,259 INFO [org.infinispan.client.hotrod.impl.protocol.Codec21] (sc-main) ISPN004006: /172.18.1.18:11222 sent new topology view (id=14, age=0) containing 8 addresses: [/172.18.1.28:11222, /172.18.1.26:11222, /172.18.1.32:11222, /172.18.1.30:11222, /172.18.1.20:11222, /172.18.1.18:11222, /172.18.1.24:11222, /172.18.1.22:11222]
12:47:19,259 INFO [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (sc-main) ISPN004014: New server added(/172.18.1.28:11222), adding to the pool.
12:47:19,260 INFO [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (sc-main) ISPN004014: New server added(/172.18.1.26:11222), adding to the pool.
12:47:19,262 INFO [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (sc-main) ISPN004014: New server added(/172.18.1.32:11222), adding to the pool.
12:47:19,263 INFO [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (sc-main) ISPN004014: New server added(/172.18.1.30:11222), adding to the pool.
12:47:19,264 INFO [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (sc-main) ISPN004014: New server added(/172.18.1.20:11222), adding to the pool.
12:47:19,265 INFO [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (sc-main) ISPN004014: New server added(/172.18.1.24:11222), adding to the pool.
12:47:19,266 INFO [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (sc-main) ISPN004014: New server added(/172.18.1.22:11222), adding to the pool.
12:47:19,267 INFO [org.infinispan.client.hotrod.RemoteCacheManager] (sc-main) ISPN004021: Infinispan version: 8.1.0.Final
12:47:19,273 INFO [org.infinispan.client.hotrod.impl.protocol.Codec21] (sc-main) ISPN004006: /172.18.1.28:11222 sent new topology view (id=14, age=0) containing 8 addresses: [/172.18.1.28:11222, /172.18.1.26:11222, /172.18.1.32:11222, /172.18.1.30:11222, /172.18.1.20:11222, /172.18.1.18:11222, /172.18.1.24:11222, /172.18.1.22:11222]
[Stage 11:> (0 + 64) / 256] [Stage 11:==============================================> (223 + 33) / 256]
12:50:21,902 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 215.0 in stage 11.0 (TID 2855, 172.18.1.22): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1445 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at
scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) [Stage 11:==============================================> (223 + 33) / 256]12:50:52,248 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 252.0 in stage 11.0 (TID 2876, 172.18.1.20): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1231 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave18 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 12:51:02,502 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 248.0 in stage 11.0 (TID 2872, 172.18.1.20): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1241 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave18 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at 
org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 12:51:14,930 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Lost task 244.0 in stage 11.0 (TID 2868, 172.18.1.30): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1536 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave18 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) [Stage 11:==============================================> (224 + 32) / 256] [Stage 11:==============================================> (225 + 31) / 256] [Stage 
11:==============================================> (226 + 30) / 256] [Stage 11:==============================================> (227 + 29) / 256]12:51:27,015 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 213.0 in stage 11.0 (TID 2853, 172.18.1.30): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1525 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) [Stage 11:===============================================> (228 + 28) / 256] [Stage 11:===============================================> (229 + 27) / 256] [Stage 11:===============================================> (230 + 26) / 256] [Stage 11:===============================================> (231 + 25) / 256]12:51:49,526 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 214.0 in stage 11.0 (TID 2854, 172.18.1.24): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1457 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at 
org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) [Stage 11:================================================> (232 + 24) / 256] [Stage 11:================================================> (233 + 23) / 256] [Stage 11:================================================> (235 + 21) / 256] [Stage 11:================================================> (236 + 20) / 256]12:51:59,553 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 219.0 in stage 11.0 (TID 2859, 172.18.1.30): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1554 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) [Stage 11:=================================================> (237 + 
19) / 256] [Stage 11:=================================================> (238 + 18) / 256] [Stage 11:=================================================> (239 + 17) / 256] [Stage 11:=================================================> (239 + 17) / 256] [Stage 11:=================================================> (239 + 17) / 256] [Stage 11:=================================================> (239 + 17) / 256] [Stage 11:=================================================> (239 + 17) / 256] [Stage 11:=================================================> (239 + 17) / 256] [Stage 11:=================================================> (239 + 17) / 256] [Stage 11:=================================================> (239 + 17) / 256] [Stage 11:=================================================> (239 + 17) / 256] [Stage 11:=================================================> (239 + 17) / 256]13:01:40,425 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 207.0 in stage 11.0 (TID 2767, 172.18.1.32): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1582 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at 
scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:01:57,697 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 213.1 in stage 11.0 (TID 2904, 172.18.1.32): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1587 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave18 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) [Stage 11:=================================================> (239 + 17) / 256]13:02:20,251 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 214.1 in stage 11.0 (TID 2908, 172.18.1.18): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1185 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave18 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:02:21,743 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 216.0 in stage 11.0 (TID 2856, 172.18.1.30): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1648 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at 
org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) [Stage 11:=================================================> (239 + 16) / 256] [Stage 11:=================================================> (239 + 17) / 256]13:02:30,361 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 219.1 in stage 11.0 (TID 2909, 172.18.1.26): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1479 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave18 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at 
org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) [Stage 11:=================================================> (239 + 17) / 256]13:03:27,522 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 221.1 in stage 11.0 (TID 2881, 172.18.1.22): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1529 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at 
org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:04:10,609 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 207.1 in stage 11.0 (TID 2910, 172.18.1.32): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1650 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave24 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at 
scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) [Stage 11:=================================================> (239 + 17) / 256]13:04:27,852 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 213.2 in stage 11.0 (TID 2911, 172.18.1.32): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1654 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave24 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:04:37,465 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 212.2 in stage 11.0 (TID 2914, 172.18.1.18): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1206 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave18 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at 
org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:04:50,459 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 214.2 in stage 11.0 (TID 2915, 172.18.1.26): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1493 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave12 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:04:51,870 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Lost task 216.1 in stage 11.0 (TID 2916, 172.18.1.18): 
org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1210 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave24 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:04:57,809 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 211.2 in stage 11.0 (TID 2918, 172.18.1.32): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1669 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave6 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at 
org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) [Stage 11:=================================================> (239 + 17) / 256] [Stage 11:=================================================> (240 + 16) / 256]13:05:57,666 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 221.2 in stage 11.0 (TID 2920, 172.18.1.22): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1540 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave24 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:05:57,766 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 209.2 in stage 11.0 (TID 2921, 172.18.1.30): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1666 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave3 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at 
org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:05:57,861 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 220.2 in stage 11.0 (TID 2924, 172.18.1.32): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1695 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave21 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:06:40,741 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 207.2 in stage 11.0 (TID 2927, 172.18.1.32): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1724 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave24 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at 
org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) [Stage 11:=================================================> (240 + 16) / 256]13:06:58,032 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Lost task 213.3 in stage 11.0 (TID 2928, 172.18.1.32): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1728 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave24 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:06:58,041 ERROR [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Task 213 in stage 11.0 failed 4 times; aborting job [Stage 11:=================================================> (240 + 15) / 256]13:06:58,075 ERROR [org.radargun.stages.mapreduce.MapReduceStage] (sc-main) executeMapReduceTask() returned an exception org.apache.spark.SparkException: Job aborted due to stage failure: Task 213 in stage 11.0 failed 4 times, most recent failure: Lost task 213.3 in stage 11.0 (TID 2928, 172.18.1.32): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1728 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave24 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Driver stacktrace: at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1283) at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1271) at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1270) at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47) at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1270) at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697) at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697) at scala.Option.foreach(Option.scala:236) at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:697) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1496) at 
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1458) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1447) at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48) at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:567) at org.apache.spark.SparkContext.runJob(SparkContext.scala:1822) at org.apache.spark.SparkContext.runJob(SparkContext.scala:1835) at org.apache.spark.SparkContext.runJob(SparkContext.scala:1848) at org.apache.spark.SparkContext.runJob(SparkContext.scala:1919) at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:905) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108) at org.apache.spark.rdd.RDD.withScope(RDD.scala:306) at org.apache.spark.rdd.RDD.collect(RDD.scala:904) at org.apache.spark.rdd.PairRDDFunctions$$anonfun$collectAsMap$1.apply(PairRDDFunctions.scala:686) at org.apache.spark.rdd.PairRDDFunctions$$anonfun$collectAsMap$1.apply(PairRDDFunctions.scala:685) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108) at org.apache.spark.rdd.RDD.withScope(RDD.scala:306) at org.apache.spark.rdd.PairRDDFunctions.collectAsMap(PairRDDFunctions.scala:685) at org.apache.spark.api.java.JavaPairRDD.collectAsMap(JavaPairRDD.scala:646) at org.radargun.service.SparkMapReduce$MapToPairReduceByKeyTask.execute(SparkMapReduce.java:197) at org.radargun.stages.mapreduce.MapReduceStage.executeMapReduceTask(MapReduceStage.java:324) at org.radargun.stages.mapreduce.MapReduceStage.executeOnSlave(MapReduceStage.java:213) at org.radargun.SlaveBase.scenarioLoop(SlaveBase.java:87) at org.radargun.SlaveBase$ScenarioRunner.run(SlaveBase.java:151) Caused by: org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1728 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave24 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at 
org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:06:58,096 INFO [org.radargun.stages.mapreduce.MapReduceStage] (sc-main) -------------------- 13:06:58,096 INFO [org.radargun.stages.mapreduce.MapReduceStage] (sc-main) 5: Got the same results for two Map/Reduce runs 13:06:58,122 INFO [org.infinispan.client.hotrod.impl.protocol.Codec21] (sc-main) ISPN004006: /172.18.1.18:11222 sent new topology view (id=14, age=0) containing 8 addresses: [/172.18.1.28:11222, /172.18.1.26:11222, /172.18.1.32:11222, /172.18.1.30:11222, /172.18.1.20:11222, /172.18.1.18:11222, /172.18.1.24:11222, /172.18.1.22:11222] 13:06:58,122 INFO [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (sc-main) ISPN004014: New server added(/172.18.1.28:11222), adding to the pool. 13:06:58,123 INFO [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (sc-main) ISPN004014: New server added(/172.18.1.26:11222), adding to the pool. 13:06:58,130 INFO [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (sc-main) ISPN004014: New server added(/172.18.1.32:11222), adding to the pool. 13:06:58,133 INFO [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (sc-main) ISPN004014: New server added(/172.18.1.30:11222), adding to the pool. 13:06:58,134 INFO [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (sc-main) ISPN004014: New server added(/172.18.1.20:11222), adding to the pool. 13:06:58,135 INFO [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (sc-main) ISPN004014: New server added(/172.18.1.24:11222), adding to the pool. 13:06:58,136 INFO [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (sc-main) ISPN004014: New server added(/172.18.1.22:11222), adding to the pool. 
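The ISPN004006/ISPN004014 messages above show the Hot Rod client receiving a fresh topology view and adding the remaining cluster members to its connection pool as the next Map/Reduce iteration is prepared. A minimal sketch of how such a client is typically configured, assuming only the 172.18.1.18:11222 endpoint visible in this log (the class name and the 60-second socket timeout below are illustrative, not values from this run; the replication timeouts reported above are raised on the server side, so client settings only affect how long the client waits for a reply):

import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

public class HotRodClientSketch {          // hypothetical class name, for illustration only
    public static void main(String[] args) {
        ConfigurationBuilder cb = new ConfigurationBuilder();
        // Bootstrap server taken from this log; the topology view (ISPN004006)
        // then adds the remaining 172.18.1.x:11222 members to the pool (ISPN004014).
        cb.addServer().host("172.18.1.18").port(11222);
        cb.socketTimeout(60_000);          // assumed value, in milliseconds
        RemoteCacheManager rcm = new RemoteCacheManager(cb.build());
        RemoteCache<String, String> cache = rcm.getCache();
        System.out.println("Connected to cache '" + cache.getName() + "'");
        rcm.stop();
    }
}

Listing a single bootstrap server is enough because the client is topology-aware: the remaining seven 172.18.1.x servers in the view are discovered at runtime, which is what the ISPN004014 lines above record.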
13:06:58,137 INFO [org.infinispan.client.hotrod.RemoteCacheManager] (sc-main) ISPN004021: Infinispan version: 8.1.0.Final 13:06:58,143 INFO [org.infinispan.client.hotrod.impl.protocol.Codec21] (sc-main) ISPN004006: /172.18.1.28:11222 sent new topology view (id=14, age=0) containing 8 addresses: [/172.18.1.28:11222, /172.18.1.26:11222, /172.18.1.32:11222, /172.18.1.30:11222, /172.18.1.20:11222, /172.18.1.18:11222, /172.18.1.24:11222, /172.18.1.22:11222] [Stage 13:> (0 + 51) / 256]13:07:07,577 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 212.3 in stage 11.0 (TID 2931, 172.18.1.24): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1485 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave18 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) [Stage 13:> (0 + 52) / 256] [Stage 13:> (1 + 52) / 256] [Stage 13:> (2 + 52) / 256] [Stage 13:> (4 + 52) / 256] [Stage 13:=> (5 + 53) / 256] [Stage 13:=> (7 + 52) / 256] [Stage 13:==> (11 + 52) / 256] [Stage 13:===> (15 + 52) / 256]13:07:20,679 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 214.3 in stage 11.0 (TID 2932, 172.18.1.28): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1421 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave12 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at 
org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) [Stage 13:===> (15 + 53) / 256] [Stage 13:===> (16 + 53) / 256] [Stage 13:===> (17 + 53) / 256] [Stage 13:===> (18 + 53) / 256] [Stage 13:====> (19 + 53) / 256] [Stage 13:====> (20 + 53) / 256] [Stage 13:====> (21 + 53) / 256]13:07:22,206 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 216.2 in stage 11.0 (TID 2933, 172.18.1.26): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1507 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave24 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) [Stage 13:====> (22 + 54) / 256] [Stage 
13:====> (23 + 55) / 256] [Stage 13:=====> (24 + 55) / 256] [Stage 13:=====> (25 + 55) / 256] [Stage 13:=====> (25 + 56) / 256] [Stage 13:=====> (27 + 55) / 256] [Stage 13:=====> (28 + 55) / 256] [Stage 13:======> (29 + 55) / 256] [Stage 13:======> (30 + 55) / 256] [Stage 13:======> (32 + 55) / 256] [Stage 13:======> (33 + 55) / 256] [Stage 13:=======> (35 + 55) / 256]13:07:28,053 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 211.3 in stage 11.0 (TID 2935, 172.18.1.32): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1743 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave6 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) [Stage 13:=======> (36 + 56) / 256] [Stage 13:========> (38 + 56) / 256] [Stage 13:========> (39 + 56) / 256] [Stage 13:========> (41 + 56) / 256] [Stage 13:=========> (43 + 56) / 256]13:07:37,995 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 219.3 in stage 11.0 (TID 2936, 172.18.1.28): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1449 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave24 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at 
org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) [Stage 13:=========> (43 + 57) / 256] [...]
[Stage 13:==================================> (168 + 57) / 256]13:08:27,858 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 221.3 in stage 11.0 (TID 2937, 172.18.1.32): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1757 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave24 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at
org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) [Stage 13:==================================> (168 + 58) / 256]13:08:28,047 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 209.3 in stage 11.0 (TID 2938, 172.18.1.20): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1397 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave3 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) [Stage 13:===================================> (170 + 57) / 256] [Stage 13:===================================> (170 + 58) / 256] [Stage 13:===================================> (171 + 57) / 256]13:08:32,618 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Lost task 220.3 in stage 11.0 (TID 2939, 172.18.1.32): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1768 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave21 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at 
org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) [Stage 13:===================================> (172 + 57) / 256] [...] [Stage 13:=========================================> (201 + 55) / 256]13:09:10,928 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Lost task 207.3 in stage 11.0 (TID 2943, 172.18.1.32):
org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1796 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave24
    at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343)
    at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132)
    at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118)
    at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56)
    at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35)
    at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91)
    at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75)
    at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
    at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
    at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209)
    at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
    at org.apache.spark.scheduler.Task.run(Task.scala:88)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
[Stage 13:=========================================> (202 + 54) / 256]
13:09:28,465 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 192.0 in stage 13.0 (TID 2950, 172.18.1.32): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1800 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave18
    ... (stack trace identical to the first trace above)
13:09:33,958 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 74.0 in stage 13.0 (TID 3097, 172.18.1.28): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1529 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15
    ... (stack trace identical to the first trace above, plus one nested RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) frame)
13:09:59,462 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 193.0 in stage 13.0 (TID 3034, 172.18.1.32): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1809 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave24
    ... (stack trace identical to the first trace above)
13:10:07,035 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 79.0 in stage 13.0 (TID 3129, 172.18.1.28): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1533 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15
    ... (stack trace identical to the first trace above, plus one nested RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) frame)
[Stage 13:=========================================> (202 + 54) / 256]
[Stage 13:=========================================> (202 + 54) / 256]
13:11:37,069 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 194.0 in stage 13.0 (TID 3166, 172.18.1.30): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1968 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave21
    ... (stack trace identical to the first trace above)
13:11:37,110 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 196.0 in stage 13.0 (TID 3168, 172.18.1.30): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1970 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave3
    ... (stack trace identical to the first trace above)
13:11:58,092 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Lost task 197.0 in stage 13.0 (TID 3169, 172.18.1.32): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1835 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15
    ... (stack trace identical to the first trace above)
13:12:07,197 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 201.0 in stage 13.0 (TID 3176, 172.18.1.32): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1848 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave18
    ... (stack trace identical to the first trace above)
13:12:10,243 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 208.0 in stage 13.0 (TID 3183, 172.18.1.20): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1518 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15
    ... (stack trace identical to the first trace above)
13:12:10,263 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Lost task 205.0 in stage 13.0 (TID 3180, 172.18.1.18): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1488 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave24
    ... (stack trace identical to the first trace above)
13:12:10,285 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Lost task 209.0 in stage 13.0 (TID 3184, 172.18.1.18): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1494 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave3
    ... (stack trace identical to the first trace above)
[Stage 13:=========================================> (202 + 50) / 256]
[Stage 13:=========================================> (202 + 48) / 256]
[Stage 13:=========================================> (202 + 47) / 256]
[Stage 13:=========================================> (202 + 54) / 256]
13:12:17,171 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Lost task 198.0 in stage 13.0 (TID 3171, 172.18.1.32): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1840 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave9
    ... (stack trace identical to the first trace above)
13:12:20,296 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Lost task 220.0 in stage 13.0 (TID 3196, 172.18.1.22): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1867 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave21
    ... (stack trace identical to the first trace above)
13:12:21,708 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 221.0 in stage 13.0 (TID 3197, 172.18.1.24): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1809 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15
    ... (stack trace identical to the first trace above)
13:12:24,221 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 204.0 in stage 13.0 (TID 3179, 172.18.1.20): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1523 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave6
    ... (stack trace identical to the first trace above)
13:12:35,999 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 222.0 in stage 13.0 (TID 3198, 172.18.1.30): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1996 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave24
    ... (stack trace identical to the first trace above)
13:12:37,199 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 223.0 in stage 13.0 (TID 3199, 172.18.1.24): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1811 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave21
    ... (stack trace identical to the first trace above)
13:13:01,131 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 192.1 in stage 13.0 (TID 3200, 172.18.1.32): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1869 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave24
    ... (stack trace identical to the first trace above)
[Stage 13:=========================================> (202 + 54) / 256]
[Stage 13:==========================================> (203 + 53) / 256]
[Stage 13:==========================================> (203 + 53) / 256]
[Stage 13:==========================================> (204 + 52) / 256]
13:15:07,346 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 195.1 in stage 13.0 (TID 3208, 172.18.1.26): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1770 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15
    ... (stack trace identical to the first trace above)
[Stage 13:==========================================> (205 + 51) / 256]
13:15:28,320 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Lost task 197.1 in stage 13.0 (TID 3210, 172.18.1.32): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1906 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15
    ... (stack trace identical to the first trace above)
[Stage 13:==========================================> (206 + 50) / 256]
13:15:40,533 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 210.1 in stage 13.0 (TID 3213, 172.18.1.26): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1773 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15
    ... (stack trace identical to the first trace above)
13:15:40,843 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Lost task 209.1 in stage 13.0 (TID 3215, 172.18.1.32): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1920 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave3
    ... (stack trace identical to the first trace above)
13:15:44,205 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 200.1 in stage 13.0 (TID 3219, 172.18.1.30): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2039 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave21
    ... (stack trace identical to the first trace above)
13:15:44,238 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 207.1 in stage 13.0 (TID 3220, 172.18.1.20): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1575 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave24
    ... (stack trace identical to the first trace above)
13:15:44,381 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 212.1 in stage 13.0 (TID 3218, 172.18.1.26): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1777 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave18
    ... (stack trace identical to the first trace above)
[Stage 13:==========================================> (207 + 49) / 256]
13:15:54,850 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 221.1 in stage 13.0 (TID 3231, 172.18.1.18): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1548 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15
    ... (stack trace identical to the first trace above)
13:15:56,983 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 199.1 in stage 13.0 (TID 3214, 172.18.1.32): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1936 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave24
    ... (stack trace identical to the first trace above)
13:15:58,572 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 82.0 in stage 13.0 (TID 3152, 172.18.1.24): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1870 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave18
    ... (stack trace identical to the first trace above, plus six nested RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) frames)
13:16:04,865 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 204.1 in stage 13.0 (TID 3232, 172.18.1.26): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1781 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave6
    ... (stack trace identical to the first trace above)
13:16:07,292 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 75.1 in stage 13.0 (TID 3202, 172.18.1.28): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1667 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15
    ... (stack trace identical to the first trace above, plus four nested RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) frames)
13:16:10,525 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Lost task 223.1 in stage 13.0 (TID 3236, 172.18.1.24): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1878 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave21
    ... (stack trace identical to the first trace above)
13:16:12,859 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 198.1 in stage 13.0 (TID 3226, 172.18.1.32): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1922 returned server error (status=0x86):
org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave9 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) [Stage 13:===========================================> (208 + 48) / 256] [Stage 13:===========================================> (208 + 48) / 256]13:17:31,540 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 76.0 in stage 13.0 (TID 3100, 172.18.1.28): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1692 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave18 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at 
org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:17:37,634 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 79.1 in stage 13.0 (TID 3205, 172.18.1.28): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1696 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at 
org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:18:07,332 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 95.0 in stage 13.0 (TID 3165, 172.18.1.22): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1957 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave18 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:18:07,410 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Lost task 77.1 in stage 13.0 (TID 3203, 172.18.1.28): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1706 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication 
timeout for slave15 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) [Stage 13:===========================================> (208 + 48) / 256]13:18:31,479 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 82.1 in stage 13.0 (TID 3258, 172.18.1.28): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1727 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave6 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at 
org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) [Stage 13:===========================================> (209 + 47) / 256] [Stage 13:===========================================> (210 + 46) / 256]13:18:37,374 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 89.1 in stage 13.0 (TID 3262, 172.18.1.22): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1973 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) [Stage 13:===========================================> (211 + 45) / 256] [Stage 13:===========================================> (212 + 44) / 256]13:18:43,914 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 88.1 in stage 13.0 (TID 3267, 172.18.1.20): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1629 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: 
Replication timeout for slave24 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:18:49,790 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 81.1 in stage 13.0 (TID 3264, 172.18.1.20): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1636 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave21 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:18:58,555 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Lost task 197.2 in stage 13.0 (TID 3239, 172.18.1.32): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1981 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:19:10,813 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 210.2 in stage 13.0 (TID 3240, 172.18.1.32): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1989 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at 
scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:19:14,401 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 200.2 in stage 13.0 (TID 3243, 172.18.1.18): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1594 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave21 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:19:14,453 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Lost task 207.2 in stage 13.0 (TID 3244, 172.18.1.20): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1635 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave24 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at 
org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:19:14,625 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Lost task 212.2 in stage 13.0 (TID 3247, 172.18.1.26): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1829 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave18 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:19:14,711 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 215.2 in stage 13.0 (TID 3249, 172.18.1.26): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1831 returned server error (status=0x86): 
org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave3 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:19:25,075 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 221.2 in stage 13.0 (TID 3255, 172.18.1.24): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1920 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:19:27,212 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Lost task 199.2 in stage 13.0 (TID 3256, 172.18.1.32): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1995 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave21 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:19:28,655 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 218.2 in stage 13.0 (TID 3257, 172.18.1.26): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1835 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave24 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at 
scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) [Stage 13:============================================> (213 + 43) / 256]13:19:40,826 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Lost task 223.2 in stage 13.0 (TID 3269, 172.18.1.26): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1839 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave21 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:19:41,801 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 202.2 in stage 13.0 (TID 3270, 172.18.1.18): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1603 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave24 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at 
org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:19:51,518 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 204.2 in stage 13.0 (TID 3260, 172.18.1.24): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1922 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave6 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:20:04,767 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Lost task 198.2 in stage 13.0 (TID 3271, 172.18.1.32): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2006 returned server error (status=0x86): 
org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave9 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) [Stage 13:============================================> (213 + 43) / 256]13:20:37,500 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 83.1 in stage 13.0 (TID 3282, 172.18.1.24): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1934 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at 
org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:20:40,056 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Lost task 95.1 in stage 13.0 (TID 3279, 172.18.1.22): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2003 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave21 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) [Stage 13:============================================> (214 + 42) / 256]13:21:01,736 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 76.1 in stage 13.0 (TID 3273, 172.18.1.28): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1782 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at 
org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) [Stage 13:============================================> (215 + 41) / 256]13:21:07,587 ERROR [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Task 75 in stage 13.0 failed 4 times; aborting job 13:21:07,593 ERROR [org.radargun.stages.mapreduce.MapReduceStage] (sc-main) executeMapReduceTask() returned an exception org.apache.spark.SparkException: Job aborted due to stage failure: Task 75 in stage 13.0 failed 4 times, most recent failure: Lost task 75.3 in stage 13.0 (TID 3288, 172.18.1.28): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1787 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Driver stacktrace: at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1283) at 
org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1271) at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1270) at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47) at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1270) at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697) at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697) at scala.Option.foreach(Option.scala:236) at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:697) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1496) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1458) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1447) at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48) at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:567) at org.apache.spark.SparkContext.runJob(SparkContext.scala:1822) at org.apache.spark.SparkContext.runJob(SparkContext.scala:1835) at org.apache.spark.SparkContext.runJob(SparkContext.scala:1848) at org.apache.spark.SparkContext.runJob(SparkContext.scala:1919) at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:905) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108) at org.apache.spark.rdd.RDD.withScope(RDD.scala:306) at org.apache.spark.rdd.RDD.collect(RDD.scala:904) at org.apache.spark.rdd.PairRDDFunctions$$anonfun$collectAsMap$1.apply(PairRDDFunctions.scala:686) at org.apache.spark.rdd.PairRDDFunctions$$anonfun$collectAsMap$1.apply(PairRDDFunctions.scala:685) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108) at org.apache.spark.rdd.RDD.withScope(RDD.scala:306) at org.apache.spark.rdd.PairRDDFunctions.collectAsMap(PairRDDFunctions.scala:685) at org.apache.spark.api.java.JavaPairRDD.collectAsMap(JavaPairRDD.scala:646) at org.radargun.service.SparkMapReduce$MapToPairReduceByKeyTask.execute(SparkMapReduce.java:197) at org.radargun.stages.mapreduce.MapReduceStage.executeMapReduceTask(MapReduceStage.java:324) at org.radargun.stages.mapreduce.MapReduceStage.executeOnSlave(MapReduceStage.java:213) at org.radargun.SlaveBase.scenarioLoop(SlaveBase.java:87) at org.radargun.SlaveBase$ScenarioRunner.run(SlaveBase.java:151) Caused by: org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1787 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at 
org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745)
13:21:07,595 INFO [org.radargun.stages.mapreduce.MapReduceStage] (sc-main) --------------------
13:21:07,596 INFO [org.radargun.stages.mapreduce.MapReduceStage] (sc-main) 6: Got the same results for two Map/Reduce runs
13:21:07,626 INFO [org.infinispan.client.hotrod.impl.protocol.Codec21] (sc-main) ISPN004006: /172.18.1.18:11222 sent new topology view (id=14, age=0) containing 8 addresses: [/172.18.1.28:11222, /172.18.1.26:11222, /172.18.1.32:11222, /172.18.1.30:11222, /172.18.1.20:11222, /172.18.1.18:11222, /172.18.1.24:11222, /172.18.1.22:11222]
13:21:07,626 INFO [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (sc-main) ISPN004014: New server added(/172.18.1.28:11222), adding to the pool.
13:21:07,628 INFO [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (sc-main) ISPN004014: New server added(/172.18.1.26:11222), adding to the pool.
13:21:07,635 INFO [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (sc-main) ISPN004014: New server added(/172.18.1.32:11222), adding to the pool.
13:21:07,637 INFO [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (sc-main) ISPN004014: New server added(/172.18.1.30:11222), adding to the pool.
13:21:07,638 INFO [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (sc-main) ISPN004014: New server added(/172.18.1.20:11222), adding to the pool.
13:21:07,639 INFO [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (sc-main) ISPN004014: New server added(/172.18.1.24:11222), adding to the pool.
13:21:07,639 INFO [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (sc-main) ISPN004014: New server added(/172.18.1.22:11222), adding to the pool.
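Context for the aborted job above: the driver stack trace ends in org.radargun.service.SparkMapReduce$MapToPairReduceByKeyTask.execute calling JavaPairRDD.collectAsMap(), while the executor-side frames show the Infinispan Spark connector's InfinispanIterator reading the remote cache through the Hot Rod RemoteCloseableIterator. The server answers one of those iteration-batch requests with status 0x86, i.e. a server-side org.infinispan.util.concurrent.TimeoutException (replication timeout); the task is retried four times and the whole job is aborted. The sketch below only illustrates that driver-side call pattern (mapToPair -> reduceByKey -> collectAsMap) with plain Spark Java APIs; the class name, the local master and the in-memory data are invented for the example and are not RadarGun's actual WordCountSource, which builds its RDD over the remote cache via the connector rather than parallelize().

import java.util.Arrays;
import java.util.Map;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

import scala.Tuple2;

// Minimal word-count sketch mirroring the call chain visible in the driver stack trace:
// mapToPair -> reduceByKey -> collectAsMap. collectAsMap() is the driver-side action on
// which any executor-side HotRodClientException is rethrown as the SparkException above.
public class MapToPairReduceByKeySketch {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("testApp").setMaster("local[2]");
        JavaSparkContext sc = new JavaSparkContext(conf);
        try {
            // Stand-in for the RDD the infinispan-spark connector builds over the remote cache;
            // in the real run, iterating that RDD drives the Hot Rod remote iterator seen in the trace.
            JavaRDD<String> words = sc.parallelize(Arrays.asList("spark", "infinispan", "spark"));

            JavaPairRDD<String, Integer> counts = words
                    .mapToPair(w -> new Tuple2<>(w, 1))
                    .reduceByKey((a, b) -> a + b);

            // Collect the reduced pairs on the driver, as MapToPairReduceByKeyTask does.
            Map<String, Integer> result = counts.collectAsMap();
            System.out.println(result);
        } finally {
            sc.stop();
        }
    }
}

Note that the repeated "Replication timeout for slaveNN" messages originate on the Infinispan server nodes; the Hot Rod client only propagates them while fetching the next iteration batch, which is why they surface inside InfinispanIterator.hasNext().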
13:21:07,641 INFO [org.infinispan.client.hotrod.RemoteCacheManager] (sc-main) ISPN004021: Infinispan version: 8.1.0.Final 13:21:09,883 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 81.2 in stage 13.0 (TID 3291, 172.18.1.30): TaskKilled (killed intentionally) 13:21:37,534 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 90.1 in stage 13.0 (TID 3280, 172.18.1.18): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1631 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:22:07,357 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 80.1 in stage 13.0 (TID 3274, 172.18.1.26): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1881 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at 
org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:22:07,656 INFO [org.infinispan.client.hotrod.impl.protocol.Codec21] (sc-main) ISPN004006: /172.18.1.26:11222 sent new topology view (id=14, age=0) containing 8 addresses: [/172.18.1.28:11222, /172.18.1.26:11222, /172.18.1.32:11222, /172.18.1.30:11222, /172.18.1.20:11222, /172.18.1.18:11222, /172.18.1.24:11222, /172.18.1.22:11222] [Stage 15:> (0 + 33) / 256] [Stage 15:> (1 + 33) / 256] [Stage 15:> (2 + 33) / 256] [Stage 15:> (3 + 33) / 256] [Stage 15:> (4 + 33) / 256] [Stage 15:=> (5 + 33) / 256] [Stage 15:=> (6 + 33) / 256] [Stage 15:=> (8 + 33) / 256] [Stage 15:==> (10 + 33) / 256]13:22:28,758 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 197.3 in stage 13.0 (TID 3292, 172.18.1.32): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2051 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at 
org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) [Stage 15:==> (10 + 34) / 256] [Stage 15:==> (11 + 34) / 256] [Stage 15:==> (12 + 34) / 256] [Stage 15:==> (13 + 34) / 256] [Stage 15:==> (14 + 34) / 256] [Stage 15:===> (15 + 34) / 256] [Stage 15:===> (16 + 34) / 256] [Stage 15:===> (17 + 34) / 256] [Stage 15:===> (18 + 34) / 256] [Stage 15:====> (19 + 34) / 256] [Stage 15:====> (21 + 34) / 256] [Stage 15:====> (22 + 34) / 256] [Stage 15:====> (23 + 34) / 256] [Stage 15:=====> (25 + 34) / 256]13:22:41,047 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 210.3 in stage 13.0 (TID 3293, 172.18.1.32): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2061 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) [Stage 15:=====> (25 + 36) / 256] [Stage 15:=====> (26 + 36) / 256] [Stage 15:=====> (27 + 36) / 256] [Stage 15:=====> (28 + 36) / 256]13:22:44,844 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 200.3 in stage 13.0 (TID 3295, 172.18.1.30): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2178 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave21 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at 
org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:22:44,850 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Lost task 207.3 in stage 13.0 (TID 3296, 172.18.1.30): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2180 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave24 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:22:44,852 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 212.3 in stage 13.0 (TID 3300, 172.18.1.22): 
org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2044 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave18 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:22:45,004 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 215.3 in stage 13.0 (TID 3301, 172.18.1.26): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1906 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave3 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at 
org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) [Stage 15:=====> (28 + 44) / 256] [Stage 15:======> (29 + 44) / 256] [Stage 15:======> (30 + 44) / 256] [Stage 15:======> (31 + 44) / 256] [Stage 15:======> (32 + 44) / 256] [Stage 15:======> (33 + 45) / 256] [Stage 15:=======> (34 + 45) / 256] [Stage 15:=======> (35 + 45) / 256] [Stage 15:=======> (37 + 45) / 256] [Stage 15:========> (38 + 45) / 256] [Stage 15:========> (39 + 46) / 256] [Stage 15:========> (40 + 46) / 256] [Stage 15:========> (41 + 46) / 256] [Stage 15:========> (42 + 46) / 256] [Stage 15:========> (42 + 47) / 256]13:22:55,980 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Lost task 213.3 in stage 13.0 (TID 3306, 172.18.1.20): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1682 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave24 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:22:55,982 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 221.3 in stage 13.0 (TID 3307, 172.18.1.30): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2198 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at 
org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) [Stage 15:========> (42 + 49) / 256] [Stage 15:=========> (43 + 49) / 256] [Stage 15:=========> (45 + 49) / 256]13:22:57,718 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 199.3 in stage 13.0 (TID 3308, 172.18.1.32): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2067 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave21 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) [Stage 15:=========> (45 + 50) / 256] [Stage 
15:=========> (46 + 50) / 256] [Stage 15:=========> (46 + 51) / 256] [Stage 15:=========> (47 + 51) / 256] [Stage 15:==========> (48 + 51) / 256] [Stage 15:==========> (49 + 51) / 256] [Stage 15:==========> (50 + 51) / 256] [Stage 15:==========> (51 + 51) / 256] [Stage 15:==========> (52 + 51) / 256] [Stage 15:===========> (53 + 52) / 256] [Stage 15:===========> (54 + 52) / 256] [Stage 15:===========> (55 + 52) / 256] [Stage 15:===========> (56 + 52) / 256] [Stage 15:============> (57 + 52) / 256] [Stage 15:============> (59 + 52) / 256] [Stage 15:============> (60 + 52) / 256] [Stage 15:=============> (62 + 52) / 256]13:23:07,673 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 83.2 in stage 13.0 (TID 3316, 172.18.1.24): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1978 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) [Stage 15:=============> (63 + 55) / 256] [Stage 15:=============> (64 + 55) / 256] [Stage 15:=============> (66 + 55) / 256] [Stage 15:==============> (67 + 55) / 256] [Stage 15:==============> (69 + 55) / 256]13:23:10,379 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 95.2 in stage 13.0 (TID 3319, 172.18.1.26): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1940 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave18 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at 
org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) [Stage 15:==============> (69 + 56) / 256]13:23:11,091 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Lost task 223.3 in stage 13.0 (TID 3311, 172.18.1.18): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1690 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave21 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) [Stage 15:==============> (69 + 57) / 256] [Stage 15:==============> (70 + 57) / 256] [Stage 15:==============> (71 
+ 57) / 256] [Stage 15:===============> (72 + 57) / 256] [Stage 15:===============> (73 + 57) / 256]13:23:14,929 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 202.3 in stage 13.0 (TID 3312, 172.18.1.18): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1694 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave24 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) [Stage 15:===============> (74 + 58) / 256] [Stage 15:================> (76 + 58) / 256] [Stage 15:================> (77 + 58) / 256] [Stage 15:================> (78 + 58) / 256] [Stage 15:================> (79 + 58) / 256] [Stage 15:================> (79 + 59) / 256] [Stage 15:================> (80 + 59) / 256] [Stage 15:=================> (81 + 59) / 256] [Stage 15:=================> (83 + 59) / 256] [Stage 15:=================> (84 + 59) / 256] [Stage 15:==================> (86 + 59) / 256] [Stage 15:==================> (87 + 59) / 256] [Stage 15:==================> (88 + 59) / 256] [Stage 15:==================> (88 + 60) / 256] [Stage 15:==================> (89 + 60) / 256] [Stage 15:==================> (90 + 60) / 256] [Stage 15:===================> (91 + 60) / 256] [Stage 15:===================> (93 + 60) / 256] [Stage 15:===================> (94 + 60) / 256] [Stage 15:====================> (95 + 61) / 256] [Stage 15:====================> (96 + 60) / 256] [Stage 15:====================> (97 + 60) / 256] [Stage 15:====================> (98 + 60) / 256] [Stage 15:====================> (99 + 60) / 256] [Stage 15:====================> (100 + 60) / 256] [Stage 15:====================> (101 + 59) / 256] [Stage 15:=====================> (102 + 59) / 256] [Stage 15:=====================> (104 + 58) / 256] [Stage 15:=====================> (104 + 59) / 256] [Stage 15:=====================> (105 + 58) / 256] [Stage 
15:=====================> (106 + 58) / 256] [Stage 15:======================> (108 + 56) / 256] [Stage 15:======================> (110 + 56) / 256] [Stage 15:======================> (111 + 56) / 256] [Stage 15:=======================> (113 + 56) / 256] [Stage 15:=======================> (114 + 56) / 256]13:23:32,095 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 76.2 in stage 13.0 (TID 3320, 172.18.1.28): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1837 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave18 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) [Stage 15:=======================> (114 + 57) / 256] [Stage 15:=======================> (115 + 57) / 256] [Stage 15:========================> (116 + 57) / 256] [Stage 15:========================> (117 + 57) / 256] [Stage 15:========================> (118 + 57) / 256] [Stage 15:========================> (119 + 57) / 256] [Stage 15:========================> (120 + 57) / 256]13:23:35,470 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Lost task 198.3 in stage 13.0 (TID 3315, 172.18.1.32): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2084 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave24 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at 
org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) [Stage 15:========================> (120 + 58) / 256] [Stage 15:=========================> (121 + 58) / 256]13:23:37,750 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 89.3 in stage 13.0 (TID 3321, 172.18.1.28): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1841 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) [Stage 15:=========================> (121 + 59) / 256] [Stage 15:=========================> (124 + 58) / 256] [Stage 15:=========================> (125 + 58) / 256] [Stage 15:==========================> (126 + 58) / 256] [Stage 15:==========================> (127 + 58) / 
256]
[Stage 15 progress: 128 -> 135 of 256 tasks complete (54-57 running)]
13:23:45,060 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 84.2 in stage 13.0 (TID 3322, 172.18.1.18): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1744 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave9 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745)
[Stage 15 progress: 135 -> 192 of 256 tasks complete (39-64 running)]
13:24:37,865 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 67.0 in stage 15.0 (TID 3346, 172.18.1.28): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1883 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave18 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at
org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:24:37,932 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Lost task 64.0 in stage 15.0 (TID 3326, 172.18.1.28): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1876 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) [Stage 15:=======================================> (193 + 63) / 256] [Stage 15:========================================> (194 + 62) / 256] [Stage 15:========================================> (195 + 61) / 256] [Stage 15:========================================> (196 + 60) / 256]13:25:45,211 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 192.0 in stage 15.0 (TID 3323, 172.18.1.32): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2127 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave24 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at 
org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) [Stage 15:========================================> (197 + 59) / 256] [Stage 15:========================================> (198 + 58) / 256] [Stage 15:=========================================> (199 + 57) / 256]13:26:11,229 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 195.0 in stage 15.0 (TID 3382, 172.18.1.32): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2129 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) [Stage 15:=========================================> (200 + 
56) / 256]13:26:24,175 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 197.0 in stage 15.0 (TID 3411, 172.18.1.32): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2139 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:26:25,365 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Lost task 71.0 in stage 15.0 (TID 3502, 172.18.1.28): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1925 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave24 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at 
org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:26:30,344 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 198.0 in stage 15.0 (TID 3417, 172.18.1.32): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2148 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave9 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:26:35,208 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 76.0 in stage 15.0 (TID 3535, 172.18.1.30): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2464 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave18 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at 
org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:26:35,213 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 84.0 in stage 15.0 (TID 3543, 172.18.1.22): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2296 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave21 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:26:35,259 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 77.0 in stage 15.0 (TID 3536, 172.18.1.20): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1936 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at 
org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:26:46,581 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 94.0 in stage 15.0 (TID 3553, 172.18.1.30): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2486 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave18 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at 
java.lang.Thread.run(Thread.java:745) 13:26:48,214 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 75.0 in stage 15.0 (TID 3534, 172.18.1.22): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2308 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:26:55,648 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 88.0 in stage 15.0 (TID 3547, 172.18.1.30): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2468 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave24 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at 
org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:26:55,799 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 78.0 in stage 15.0 (TID 3537, 172.18.1.22): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2299 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave9 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:27:01,820 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 80.0 in stage 15.0 (TID 3539, 172.18.1.20): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1930 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave6 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at 
org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:27:05,146 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Lost task 81.0 in stage 15.0 (TID 3540, 172.18.1.22): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2323 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave21 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:27:08,108 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 64.1 in stage 15.0 (TID 3580, 172.18.1.28): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1946 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at 
org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:27:18,428 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 82.0 in stage 15.0 (TID 3541, 172.18.1.30): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2495 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave6 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at 
java.lang.Thread.run(Thread.java:745) 13:27:20,912 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Lost task 67.1 in stage 15.0 (TID 3579, 172.18.1.28): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1950 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave18 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) [Stage 15:=========================================> (200 + 56) / 256]13:27:35,305 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 200.0 in stage 15.0 (TID 3555, 172.18.1.22): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2326 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave21 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) 
at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:27:35,609 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 201.0 in stage 15.0 (TID 3556, 172.18.1.26): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2183 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave18 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:27:37,234 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 202.0 in stage 15.0 (TID 3557, 172.18.1.24): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2233 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave24 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at 
org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:27:40,771 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Lost task 208.0 in stage 15.0 (TID 3563, 172.18.1.18): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1944 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:27:40,935 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 209.0 in stage 15.0 (TID 3564, 172.18.1.18): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1946 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave3 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at 
org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:27:48,459 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 217.0 in stage 15.0 (TID 3572, 172.18.1.18): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1952 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave24 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at 
java.lang.Thread.run(Thread.java:745) 13:27:53,046 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 220.0 in stage 15.0 (TID 3575, 172.18.1.26): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2195 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave21 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) [Stage 15:=========================================> (201 + 55) / 256]13:27:55,114 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 221.0 in stage 15.0 (TID 3576, 172.18.1.24): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2243 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) 
at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:28:00,171 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 218.0 in stage 15.0 (TID 3573, 172.18.1.18): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1972 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave24 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) [Stage 15:=========================================> (201 + 55) / 256]13:29:00,358 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 71.1 in stage 15.0 (TID 3584, 172.18.1.28): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1994 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave24 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at 
org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:29:05,337 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 91.1 in stage 15.0 (TID 3588, 172.18.1.22): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2347 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave18 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:29:05,397 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 84.1 in stage 15.0 (TID 3589, 172.18.1.28): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1981 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave21 at 
org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:29:05,403 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 77.1 in stage 15.0 (TID 3593, 172.18.1.20): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1988 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) [Stage 15:=========================================> (202 + 54) / 256]13:29:15,437 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 192.1 in stage 15.0 (TID 3581, 172.18.1.32): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2197 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave18 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:29:18,342 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 75.1 in stage 15.0 (TID 3603, 172.18.1.20): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1998 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at 
org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:29:28,004 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 94.1 in stage 15.0 (TID 3602, 172.18.1.30): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2552 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave18 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) [Stage 15:==========================================> (203 + 53) / 256]13:29:33,091 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 95.1 in stage 15.0 (TID 3594, 172.18.1.28): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2018 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave21 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at 
org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:29:38,254 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Lost task 64.2 in stage 15.0 (TID 3610, 172.18.1.28): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2010 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:29:45,546 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 92.1 in stage 15.0 (TID 3604, 172.18.1.30): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2535 returned server error (status=0x86): 
org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave24 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) [Stage 15:==========================================> (203 + 52) / 256] [Stage 15:==========================================> (203 + 53) / 256]13:29:48,558 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Lost task 82.1 in stage 15.0 (TID 3611, 172.18.1.20): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2010 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave18 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at 
org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:29:49,900 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 78.1 in stage 15.0 (TID 3606, 172.18.1.22): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2360 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave9 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:29:54,361 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 197.1 in stage 15.0 (TID 3583, 172.18.1.32): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2213 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at 
org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:29:56,511 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Lost task 80.1 in stage 15.0 (TID 3607, 172.18.1.32): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2201 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave6 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:30:00,569 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 198.1 in stage 15.0 (TID 3585, 172.18.1.32): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2215 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave24 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at 
org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:30:05,467 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 196.1 in stage 15.0 (TID 3590, 172.18.1.32): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2217 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave3 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) [Stage 15:==========================================> (204 + 52) / 
256]13:31:05,830 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Lost task 201.1 in stage 15.0 (TID 3614, 172.18.1.20): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2030 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave18 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:31:07,496 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 202.1 in stage 15.0 (TID 3615, 172.18.1.26): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2253 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave24 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at 
org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:31:11,058 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 208.1 in stage 15.0 (TID 3620, 172.18.1.26): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2257 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:31:11,097 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 209.1 in stage 15.0 (TID 3622, 172.18.1.22): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2386 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave3 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at 
org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:31:18,684 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 217.1 in stage 15.0 (TID 3629, 172.18.1.18): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1999 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave24 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:31:23,265 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 220.1 in stage 15.0 (TID 3630, 172.18.1.18): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2001 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave21 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at 
org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:31:24,231 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 212.1 in stage 15.0 (TID 3631, 172.18.1.24): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2298 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave18 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at 
java.lang.Thread.run(Thread.java:745) 13:31:25,303 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 221.1 in stage 15.0 (TID 3632, 172.18.1.18): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2003 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:31:30,450 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 218.1 in stage 15.0 (TID 3634, 172.18.1.26): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2261 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave24 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at 
org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) [Stage 15:==========================================> (204 + 52) / 256]13:31:35,201 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 203.1 in stage 15.0 (TID 3616, 172.18.1.24): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2296 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave9 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:31:35,448 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 81.1 in stage 15.0 (TID 3608, 172.18.1.30): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2576 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at 
org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:31:36,843 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Lost task 204.1 in stage 15.0 (TID 3617, 172.18.1.18): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=1995 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave6 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:31:41,003 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 84.2 in stage 15.0 (TID 3640, 172.18.1.28): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2062 returned server error (status=0x86): 
org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave21 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:31:43,988 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 91.2 in stage 15.0 (TID 3637, 172.18.1.24): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2328 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave18 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:31:48,493 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Lost task 75.2 in stage 15.0 (TID 3649, 172.18.1.30): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2581 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:31:57,380 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 76.2 in stage 15.0 (TID 3638, 172.18.1.26): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2294 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave18 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at 
scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745)
13:31:59,731 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 71.2 in stage 15.0 (TID 3636, 172.18.1.28): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2088 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave24
13:32:08,400 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Lost task 64.3 in stage 15.0 (TID 3653, 172.18.1.28): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2073 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15
13:32:08,401 ERROR [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Task 64 in stage 15.0 failed 4 times; aborting job
13:32:08,406 ERROR [org.radargun.stages.mapreduce.MapReduceStage] (sc-main) executeMapReduceTask() returned an exception
org.apache.spark.SparkException: Job aborted due to stage failure: Task 64 in stage 15.0 failed 4 times, most recent failure: Lost task 64.3 in stage 15.0 (TID 3653, 172.18.1.28): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2073 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15
    at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343)
    at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132)
    at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118)
    at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56)
    at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35)
    at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91)
    at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75)
    at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
    at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
    at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209)
    at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
    at org.apache.spark.scheduler.Task.run(Task.scala:88)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1283)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1271)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1270)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1270)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
    at scala.Option.foreach(Option.scala:236)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:697)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1496)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1458)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1447)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:567)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1822)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1835)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1848)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1919)
    at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:905)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:306)
    at org.apache.spark.rdd.RDD.collect(RDD.scala:904)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$collectAsMap$1.apply(PairRDDFunctions.scala:686)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$collectAsMap$1.apply(PairRDDFunctions.scala:685)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:306)
    at org.apache.spark.rdd.PairRDDFunctions.collectAsMap(PairRDDFunctions.scala:685)
    at org.apache.spark.api.java.JavaPairRDD.collectAsMap(JavaPairRDD.scala:646)
    at org.radargun.service.SparkMapReduce$MapToPairReduceByKeyTask.execute(SparkMapReduce.java:197)
    at org.radargun.stages.mapreduce.MapReduceStage.executeMapReduceTask(MapReduceStage.java:324)
    at org.radargun.stages.mapreduce.MapReduceStage.executeOnSlave(MapReduceStage.java:213)
    at org.radargun.SlaveBase.scenarioLoop(SlaveBase.java:87)
    at org.radargun.SlaveBase$ScenarioRunner.run(SlaveBase.java:151)
Caused by: org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2073 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15
    at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343)
    at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132)
    at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118)
    at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56)
    at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35)
    at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91)
    at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75)
    at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
    at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
    at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209)
    at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
    at org.apache.spark.scheduler.Task.run(Task.scala:88)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
13:32:08,408 INFO [org.radargun.stages.mapreduce.MapReduceStage] (sc-main) --------------------
13:32:08,409 INFO [org.radargun.stages.mapreduce.MapReduceStage] (sc-main) 7: Got the same results for two Map/Reduce runs
13:32:08,440 INFO [org.infinispan.client.hotrod.impl.protocol.Codec21] (sc-main) ISPN004006: /172.18.1.18:11222 sent new topology view (id=14, age=0) containing 8 addresses: [/172.18.1.28:11222, /172.18.1.26:11222, /172.18.1.32:11222, /172.18.1.30:11222, /172.18.1.20:11222, /172.18.1.18:11222, /172.18.1.24:11222, /172.18.1.22:11222]
13:32:08,440 INFO [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (sc-main) ISPN004014: New server added(/172.18.1.28:11222), adding to the pool.
13:32:08,441 INFO [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (sc-main) ISPN004014: New server added(/172.18.1.26:11222), adding to the pool.
13:32:08,449 INFO [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (sc-main) ISPN004014: New server added(/172.18.1.32:11222), adding to the pool.
13:32:08,452 INFO [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (sc-main) ISPN004014: New server added(/172.18.1.30:11222), adding to the pool.
13:32:08,453 INFO [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (sc-main) ISPN004014: New server added(/172.18.1.20:11222), adding to the pool.
13:32:08,454 INFO [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (sc-main) ISPN004014: New server added(/172.18.1.24:11222), adding to the pool.
13:32:08,455 INFO [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (sc-main) ISPN004014: New server added(/172.18.1.22:11222), adding to the pool.
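The two ERROR records above show the whole Map/Reduce run being aborted as soon as task 64 of stage 15.0 has failed four times; four attempts is Spark's default spark.task.maxFailures. Purely as an illustration (this snippet is not part of the RadarGun sources, the class name and values are invented, and raising the limit only masks the underlying replication timeouts), a driver for this kind of benchmark could be built to tolerate more transient task failures:

// Hypothetical driver-side sketch: allow more task attempts before Spark aborts the
// stage, so a transient Hot Rod replication timeout does not abort the job after the
// default 4 failures, as happened at 13:32:08,401 above.
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class RetryTolerantDriver {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setAppName("wordcount-retry-sketch")   // illustrative name
                .setMaster("local[2]")                  // illustrative; the logged run used a standalone master
                .set("spark.task.maxFailures", "16");   // Spark default is 4
        JavaSparkContext sc = new JavaSparkContext(conf);
        // ... build the Infinispan-backed RDD and run the mapToPair/reduceByKey job here ...
        sc.stop();
    }
}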
13:32:08,456 INFO [org.infinispan.client.hotrod.RemoteCacheManager] (sc-main) ISPN004021: Infinispan version: 8.1.0.Final
13:32:15,682 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 92.2 in stage 15.0 (TID 3655, 172.18.1.28): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2083 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave18
13:32:37,753 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Lost task 67.3 in stage 15.0 (TID 3659, 172.18.1.28): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2099 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave18
13:32:42,218 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 78.2 in stage 15.0 (TID 3658, 172.18.1.20): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2071 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave9
13:32:53,997 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 80.2 in stage 15.0 (TID 3661, 172.18.1.20): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2063 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave6
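The ISPN004006/ISPN004014/ISPN004021 INFO records above show the driver-side Hot Rod client (Infinispan 8.1.0.Final) refreshing its view of the eight-node server cluster while tasks keep failing with status=0x86 replication timeouts. The replication timeout itself is a server-side setting on the clustered cache, but the client-side knobs that control how long such a client waits and how often it retries are set when the RemoteCacheManager is built. A minimal sketch with illustrative values only (this is not the configuration used in the logged run):

// Hypothetical Hot Rod client configuration sketch; the timeout and retry values are examples.
import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.Configuration;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

public class HotRodClientSketch {
    public static void main(String[] args) {
        Configuration cfg = new ConfigurationBuilder()
                .addServer().host("172.18.1.18").port(11222)  // one of the servers in the logged topology
                .socketTimeout(120_000)                       // illustrative: wait longer before a client-side timeout
                .maxRetries(5)                                // illustrative: retry a failed operation on another server
                .build();
        RemoteCacheManager rcm = new RemoteCacheManager(cfg);
        RemoteCache<Object, Object> cache = rcm.getCache();
        // Note: the TimeoutException in this log is raised on the server while replicating,
        // so client-side timeouts and retries only change how that failure is reported and retried.
        System.out.println("connected, cache size = " + cache.size());
        rcm.stop();
    }
}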
13:33:03,394 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 192.2 in stage 15.0 (TID 3648, 172.18.1.32): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2279 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave24
13:33:08,476 INFO [org.infinispan.client.hotrod.impl.protocol.Codec21] (sc-main) ISPN004006: /172.18.1.26:11222 sent new topology view (id=14, age=0) containing 8 addresses: [/172.18.1.28:11222, /172.18.1.26:11222, /172.18.1.32:11222, /172.18.1.30:11222, /172.18.1.20:11222, /172.18.1.18:11222, /172.18.1.24:11222, /172.18.1.22:11222]
[Stage 17:> (0 + 20) / 256]
13:33:11,567 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Lost task 195.2 in stage 15.0 (TID 3654, 172.18.1.32): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2274 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15
[Stage 17:> (0 + 21) / 256]
13:33:24,564 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 197.2 in stage 15.0 (TID 3660, 172.18.1.32): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2276 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15
[Stage 17:====> (21 + 22) / 256]
13:33:52,448 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 196.2 in stage 15.0 (TID 3663, 172.18.1.32): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2300 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave24
[Stage 17:====> (21 + 23) / 256]
13:33:52,640 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 198.2 in stage 15.0 (TID 3662, 172.18.1.32): TaskKilled (killed intentionally)
[Stage 17:=======> (36 + 24) / 256]
13:34:06,506 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Lost task 83.3 in stage 15.0 (TID 3687, 172.18.1.18): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2052 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15
[Stage 17:========> (41 + 29) / 256]
13:34:14,151 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 91.3 in stage 15.0 (TID 3695, 172.18.1.26): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2337 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave18
[Stage 17:=========> (46 + 30) / 256]
13:34:18,598 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Lost task 75.3 in stage 15.0 (TID 3696, 172.18.1.18): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2064 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15
[Stage 17:===========> (53 + 32) / 256]
13:34:27,546 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 76.3 in stage 15.0 (TID 3698, 172.18.1.22): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2502 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave18
[Stage 17:===========> (53 + 34) / 256]
13:34:28,683 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 81.2 in stage 15.0 (TID 3686, 172.18.1.22): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2470 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave21
[Stage 17:============> (61 + 38) / 256]
13:34:31,247 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 72.3 in stage 15.0 (TID 3689, 172.18.1.28): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2118 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave9
[Stage 17:==============> (67 + 42) / 256]
13:34:38,311 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 202.2 in stage 15.0 (TID 3665, 172.18.1.32): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2311 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave24
[Stage 17:==============> (69 + 46) / 256]
13:34:41,291 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 209.2 in stage 15.0 (TID 3670, 172.18.1.18): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2081 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave3
13:34:41,322 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Lost task 208.2 in stage 15.0 (TID 3669, 172.18.1.26): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2356 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15
[Stage 17:==============> (69 + 48) / 256]
13:34:41,976 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 200.2 in stage 15.0 (TID 3671, 172.18.1.26): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2358 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave21
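For orientation, the driver stack trace logged at 13:32:08,406 ends in JavaPairRDD.collectAsMap(), called from SparkMapReduce$MapToPairReduceByKeyTask.execute(); that is the point at which a task that has exhausted its retries surfaces on the driver as a SparkException. A rough, self-contained illustration of that call pattern (not the RadarGun implementation; the data, class name, and local master are invented):

// Hypothetical illustration of the mapToPair/reduceByKey/collectAsMap pattern named in
// the driver stack trace; a job aborted by repeated task failures is thrown from collectAsMap().
import java.util.Arrays;
import java.util.Map;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import scala.Tuple2;

public class CollectAsMapSketch {
    public static void main(String[] args) {
        JavaSparkContext sc = new JavaSparkContext(
                new SparkConf().setAppName("collectAsMap-sketch").setMaster("local[2]"));
        try {
            JavaRDD<String> words = sc.parallelize(Arrays.asList("a", "b", "a"));
            JavaPairRDD<String, Integer> counts = words
                    .mapToPair(w -> new Tuple2<String, Integer>(w, 1))
                    .reduceByKey((x, y) -> x + y);
            // collectAsMap() pulls the reduced pairs back to the driver; if any task fails too
            // often (e.g. on a replication timeout), org.apache.spark.SparkException is thrown here.
            Map<String, Integer> result = counts.collectAsMap();
            System.out.println(result);
        } catch (Exception e) {
            // Mirrors what MapReduceStage logs: executeMapReduceTask() returned an exception.
            System.err.println("map/reduce task failed: " + e);
        } finally {
            sc.stop();
        }
    }
}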
[Stage 17:================> (77 + 53) / 256]
13:34:49,513 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 216.2 in stage 15.0 (TID 3676, 172.18.1.22): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2536 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave24
[Stage 17:===================> (94 + 55) / 256]
13:34:56,041 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 220.2 in stage 15.0 (TID 3678, 172.18.1.26): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2374 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave21
[Stage 17:===================> (94 + 56) / 256]
13:34:57,555 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 212.2 in stage 15.0 (TID 3679, 172.18.1.18): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2099 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave18
[Stage 17:====================> (95 + 58) / 256]
13:34:58,668 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 221.2 in stage 15.0 (TID 3681, 172.18.1.30): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2755 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15
[Stage 17:=====================> (103 + 57) / 256]
13:35:03,389 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 218.2 in stage 15.0 (TID 3683, 172.18.1.26): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2382 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave24
[Stage 17:=========================> (125 + 50) / 256]
13:35:13,565 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Lost task 204.2 in stage 15.0 (TID 3688, 172.18.1.24): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2468 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave6
[Stage 17:================================> (156 + 46) / 256]
13:35:33,159 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 203.2 in stage 15.0 (TID 3685, 172.18.1.20): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2175 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave9
[Stage 17:=================================> (160 + 64) / 256]
13:35:38,716 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Lost task 66.0 in stage 17.0 (TID 3719, 172.18.1.28): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2160 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave18
    at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343)
    at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132)
    at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118)
    at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56)
    at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35)
    at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91)
    at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75)
    at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
    at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
    at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209)
    at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at 
org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:35:38,729 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 64.0 in stage 17.0 (TID 3708, 172.18.1.28): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2163 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) [Stage 17:=================================> (161 + 63) / 256] [Stage 17:=================================> (162 + 63) / 256] [Stage 17:=================================> (163 + 63) / 256] [Stage 17:=================================> (164 + 63) / 256] [Stage 17:==================================> (165 + 62) / 256] [Stage 17:==================================> (166 + 62) / 256] [Stage 17:==================================> (167 + 62) / 256] [Stage 17:==================================> (168 + 61) / 256] [Stage 17:==================================> (169 + 60) / 256] [Stage 17:===================================> (170 + 59) / 256] [Stage 17:===================================> (171 + 58) / 256] [Stage 17:===================================> (171 + 64) / 256] [Stage 17:===================================> (172 + 64) / 256] [Stage 17:====================================> (174 + 64) / 256] [Stage 17:====================================> (175 + 64) / 256] [Stage 17:====================================> (176 + 64) / 256] [Stage 
17:====================================> (177 + 64) / 256] [Stage 17:====================================> (178 + 64) / 256] [Stage 17:=====================================> (179 + 64) / 256] [Stage 17:=====================================> (181 + 64) / 256] [Stage 17:=====================================> (182 + 63) / 256] [Stage 17:======================================> (184 + 61) / 256] [Stage 17:======================================> (185 + 60) / 256] [Stage 17:======================================> (186 + 62) / 256] [Stage 17:======================================> (187 + 62) / 256] [Stage 17:======================================> (188 + 64) / 256] [Stage 17:=======================================> (189 + 64) / 256] [Stage 17:=======================================> (190 + 64) / 256] [Stage 17:=======================================> (191 + 64) / 256] [Stage 17:=======================================> (192 + 64) / 256] [Stage 17:=======================================> (193 + 63) / 256]13:36:42,156 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 193.0 in stage 17.0 (TID 3716, 172.18.1.32): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2358 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave6 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:36:46,631 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Lost task 67.0 in stage 17.0 (TID 3767, 172.18.1.28): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2215 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave18 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at 
org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) [Stage 17:========================================> (195 + 61) / 256]13:36:54,749 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 195.0 in stage 17.0 (TID 3725, 172.18.1.32): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2351 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) [Stage 17:========================================> (196 + 60) / 256] [Stage 17:========================================> (197 + 59) / 256] [Stage 17:========================================> (198 + 58) / 256] [Stage 17:=========================================> (199 + 57) / 256] [Stage 17:=========================================> (200 + 56) / 256]13:37:22,834 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 197.0 in stage 17.0 (TID 3749, 172.18.1.32): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2367 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) [Stage 17:=========================================> (201 + 55) / 256]13:38:06,197 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 198.0 in stage 17.0 (TID 3810, 172.18.1.32): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2382 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave24 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at 
org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:38:07,176 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Lost task 82.0 in stage 17.0 (TID 3917, 172.18.1.22): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2715 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave18 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:38:07,215 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 84.0 in stage 17.0 (TID 3919, 172.18.1.22): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2721 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave21 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at 
org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:38:07,227 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Lost task 85.0 in stage 17.0 (TID 3920, 172.18.1.30): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2911 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at 
java.lang.Thread.run(Thread.java:745) 13:38:07,233 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Lost task 79.0 in stage 17.0 (TID 3914, 172.18.1.22): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2726 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave9 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:38:23,439 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 64.1 in stage 17.0 (TID 3934, 172.18.1.28): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2242 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at 
org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:38:29,999 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Lost task 92.0 in stage 17.0 (TID 3927, 172.18.1.22): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2725 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave24 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:38:30,056 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 80.0 in stage 17.0 (TID 3915, 172.18.1.24): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2621 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave6 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at 
org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:38:32,876 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 66.1 in stage 17.0 (TID 3929, 172.18.1.28): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2238 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave21 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:38:33,062 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 78.0 in stage 17.0 (TID 3913, 172.18.1.20): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2370 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave9 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at 
org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:38:41,575 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Lost task 94.0 in stage 17.0 (TID 3935, 172.18.1.20): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2397 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave18 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at 
java.lang.Thread.run(Thread.java:745) 13:38:45,672 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 95.0 in stage 17.0 (TID 3936, 172.18.1.24): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2648 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave21 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) [Stage 17:=========================================> (201 + 55) / 256]13:39:15,272 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 200.0 in stage 17.0 (TID 3937, 172.18.1.20): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2402 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave21 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) 
at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:39:15,276 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 201.0 in stage 17.0 (TID 3938, 172.18.1.24): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2650 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave18 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:39:15,306 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Lost task 203.0 in stage 17.0 (TID 3940, 172.18.1.24): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2652 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave24 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at 
org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:39:17,638 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 208.0 in stage 17.0 (TID 3945, 172.18.1.20): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2408 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:39:18,329 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 209.0 in stage 17.0 (TID 3946, 172.18.1.18): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2357 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave3 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at 
org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:39:29,188 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 221.0 in stage 17.0 (TID 3959, 172.18.1.18): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2367 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at 
13:39:33,558 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 222.0 in stage 17.0 (TID 3960, 172.18.1.26): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2626 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave24
13:39:33,829 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Lost task 223.0 in stage 17.0 (TID 3961, 172.18.1.26): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2628 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave21
13:39:49,064 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 202.0 in stage 17.0 (TID 3939, 172.18.1.20): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2432 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave6
13:39:49,708 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 212.0 in stage 17.0 (TID 3949, 172.18.1.18): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2387 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave24
[Stage 17:=========================================> (201 + 55) / 256]
13:40:24,968 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 195.1 in stage 17.0 (TID 3964, 172.18.1.32): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2418 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15
13:40:29,861 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Lost task 193.1 in stage 17.0 (TID 3962, 172.18.1.32): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2434 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave6
13:40:37,350 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 83.1 in stage 17.0 (TID 3976, 172.18.1.32): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2427 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15
13:40:37,384 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 79.1 in stage 17.0 (TID 3979, 172.18.1.30): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2960 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave9
13:40:37,386 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 72.1 in stage 17.0 (TID 3972, 172.18.1.28): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2298 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave24
13:40:37,392 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 81.1 in stage 17.0 (TID 3977, 172.18.1.30): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2963 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave18
13:40:37,464 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 84.1 in stage 17.0 (TID 3970, 172.18.1.22): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2774 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave21
[Stage 17:=========================================> (202 + 54) / 256]
13:40:52,904 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 74.1 in stage 17.0 (TID 3984, 172.18.1.22): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2792 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave18
13:40:53,195 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Lost task 197.1 in stage 17.0 (TID 3965, 172.18.1.32): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2436 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15
13:40:55,632 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 82.1 in stage 17.0 (TID 3968, 172.18.1.28): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2331 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave6
13:41:03,213 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 78.1 in stage 17.0 (TID 3990, 172.18.1.20): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2447 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave18
13:41:04,268 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 88.1 in stage 17.0 (TID 3969, 172.18.1.22): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2772 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave24
13:41:15,828 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 95.1 in stage 17.0 (TID 3992, 172.18.1.30): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2983 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave18
13:41:25,954 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 66.2 in stage 17.0 (TID 3989, 172.18.1.28): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2339 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave21
13:41:46,913 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 67.2 in stage 17.0 (TID 3999, 172.18.1.28): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2347 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave18
[Stage 17:=========================================> (202 + 54) / 256]
[Stage 17:==========================================> (203 + 53) / 256]
[Stage 17:==========================================> (204 + 52) / 256]
13:42:45,497 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 200.1 in stage 17.0 (TID 3993, 172.18.1.32): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2473 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave21
13:42:45,501 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 201.1 in stage 17.0 (TID 3994, 172.18.1.30): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=3005 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave18
13:42:46,359 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Lost task 204.1 in stage 17.0 (TID 3996, 172.18.1.22): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2815 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave24
13:42:47,778 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Lost task 208.1 in stage 17.0 (TID 4001, 172.18.1.20): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2476 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15
13:42:48,500 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 209.1 in stage 17.0 (TID 4002, 172.18.1.18): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2424 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave3
[Stage 17:==========================================> (204 + 52) / 256]
13:42:59,408 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Lost task 221.1 in stage 17.0 (TID 4013, 172.18.1.18): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2432 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15
13:43:02,775 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 203.1 in stage 17.0 (TID 3995, 172.18.1.30): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=3006 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave9
13:43:04,041 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 223.1 in stage 17.0 (TID 4015, 172.18.1.26): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2692 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave21
13:43:08,959 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 222.1 in stage 17.0 (TID 4014, 172.18.1.24): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2711 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave24
13:43:15,497 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 79.2 in stage 17.0 (TID 4021, 172.18.1.30): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=3016 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave9
13:43:15,501 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 85.2 in stage 17.0 (TID 4023, 172.18.1.32): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2487 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15
13:43:16,353 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Lost task 81.2 in stage 17.0 (TID 4024, 172.18.1.26): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2698 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave18
13:43:24,362 ERROR [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Task 64 in stage 17.0 failed 4 times; aborting job
13:43:24,368 ERROR [org.radargun.stages.mapreduce.MapReduceStage] (sc-main) executeMapReduceTask() returned an exception org.apache.spark.SparkException: Job aborted due to stage failure: Task 64 in stage 17.0 failed 4 times, most recent failure: Lost task 64.3 in stage 17.0 (TID 4035, 172.18.1.28): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2378 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15
failure: Lost task 64.3 in stage 17.0 (TID 4035, 172.18.1.28): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2378 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Driver stacktrace: at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1283) at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1271) at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1270) at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47) at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1270) at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697) at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697) at scala.Option.foreach(Option.scala:236) at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:697) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1496) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1458) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1447) at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48) at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:567) at org.apache.spark.SparkContext.runJob(SparkContext.scala:1822) at org.apache.spark.SparkContext.runJob(SparkContext.scala:1835) at org.apache.spark.SparkContext.runJob(SparkContext.scala:1848) at org.apache.spark.SparkContext.runJob(SparkContext.scala:1919) at 
org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:905) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108) at org.apache.spark.rdd.RDD.withScope(RDD.scala:306) at org.apache.spark.rdd.RDD.collect(RDD.scala:904) at org.apache.spark.rdd.PairRDDFunctions$$anonfun$collectAsMap$1.apply(PairRDDFunctions.scala:686) at org.apache.spark.rdd.PairRDDFunctions$$anonfun$collectAsMap$1.apply(PairRDDFunctions.scala:685) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108) at org.apache.spark.rdd.RDD.withScope(RDD.scala:306) at org.apache.spark.rdd.PairRDDFunctions.collectAsMap(PairRDDFunctions.scala:685) at org.apache.spark.api.java.JavaPairRDD.collectAsMap(JavaPairRDD.scala:646) at org.radargun.service.SparkMapReduce$MapToPairReduceByKeyTask.execute(SparkMapReduce.java:197) at org.radargun.stages.mapreduce.MapReduceStage.executeMapReduceTask(MapReduceStage.java:324) at org.radargun.stages.mapreduce.MapReduceStage.executeOnSlave(MapReduceStage.java:213) at org.radargun.SlaveBase.scenarioLoop(SlaveBase.java:87) at org.radargun.SlaveBase$ScenarioRunner.run(SlaveBase.java:151) Caused by: org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2378 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:43:24,371 INFO [org.radargun.stages.mapreduce.MapReduceStage] (sc-main) -------------------- 13:43:24,371 INFO [org.radargun.stages.mapreduce.MapReduceStage] (sc-main) 8: Got the same results for two Map/Reduce runs 13:43:24,405 INFO [org.infinispan.client.hotrod.impl.protocol.Codec21] (sc-main) ISPN004006: /172.18.1.18:11222 sent new topology view (id=14, 
age=0) containing 8 addresses: [/172.18.1.28:11222, /172.18.1.26:11222, /172.18.1.32:11222, /172.18.1.30:11222, /172.18.1.20:11222, /172.18.1.18:11222, /172.18.1.24:11222, /172.18.1.22:11222] 13:43:24,405 INFO [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (sc-main) ISPN004014: New server added(/172.18.1.28:11222), adding to the pool. 13:43:24,411 INFO [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (sc-main) ISPN004014: New server added(/172.18.1.26:11222), adding to the pool. 13:43:24,419 INFO [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (sc-main) ISPN004014: New server added(/172.18.1.32:11222), adding to the pool. 13:43:24,420 INFO [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (sc-main) ISPN004014: New server added(/172.18.1.30:11222), adding to the pool. 13:43:24,421 INFO [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (sc-main) ISPN004014: New server added(/172.18.1.20:11222), adding to the pool. 13:43:24,422 INFO [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (sc-main) ISPN004014: New server added(/172.18.1.24:11222), adding to the pool. 13:43:24,423 INFO [org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory] (sc-main) ISPN004014: New server added(/172.18.1.22:11222), adding to the pool. 13:43:24,424 INFO [org.infinispan.client.hotrod.RemoteCacheManager] (sc-main) ISPN004021: Infinispan version: 8.1.0.Final 13:43:29,286 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 72.2 in stage 17.0 (TID 4022, 172.18.1.28): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2364 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave9 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:43:33,381 WARN 
[org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 78.2 in stage 17.0 (TID 4039, 172.18.1.22): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2836 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave18 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:43:35,918 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Lost task 202.1 in stage 17.0 (TID 4016, 172.18.1.20): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2488 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave6 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at 
org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:43:55,188 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 195.2 in stage 17.0 (TID 4018, 172.18.1.32): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2505 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:43:56,118 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 80.2 in stage 17.0 (TID 4038, 172.18.1.24): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2710 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave6 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at 
org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:43:57,876 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Lost task 95.2 in stage 17.0 (TID 4043, 172.18.1.22): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2844 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave21 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:43:59,978 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 92.2 in stage 17.0 (TID 4041, 172.18.1.20): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2505 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave24 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at 
org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:44:20,976 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 66.3 in stage 17.0 (TID 4044, 172.18.1.28): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2399 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave21 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at 
java.lang.Thread.run(Thread.java:745) 13:44:22,983 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 193.2 in stage 17.0 (TID 4019, 172.18.1.32): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2507 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave6 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:44:23,352 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Lost task 197.2 in stage 17.0 (TID 4034, 172.18.1.32): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2512 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at 
org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:44:24,443 INFO [org.infinispan.client.hotrod.impl.protocol.Codec21] (sc-main) ISPN004006: /172.18.1.26:11222 sent new topology view (id=14, age=0) containing 8 addresses: [/172.18.1.28:11222, /172.18.1.26:11222, /172.18.1.32:11222, /172.18.1.30:11222, /172.18.1.20:11222, /172.18.1.18:11222, /172.18.1.24:11222, /172.18.1.22:11222] [Stage 19:> (0 + 27) / 256]13:44:27,442 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 67.3 in stage 17.0 (TID 4045, 172.18.1.28): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2420 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave18 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) [Stage 19:> (0 + 28) / 256] [Stage 19:> (1 + 28) / 256] [Stage 19:> (2 + 28) / 256] [Stage 19:> (4 + 28) / 256] [Stage 19:=> (5 + 28) / 256] [Stage 19:=> (5 + 29) / 256] [Stage 19:=> (7 + 28) / 256] [Stage 19:=> (8 + 28) / 256] [Stage 19:=> (9 + 28) / 256] [Stage 19:==> (10 + 28) / 256] [Stage 19:==> (13 + 28) / 256] [Stage 19:==> (14 + 28) / 256] [Stage 19:===> (15 + 28) / 256] [Stage 19:===> (15 + 29) / 256] [Stage 19:===> (17 + 28) / 256] [Stage 19:===> (18 + 28) / 256] [Stage 19:====> (19 + 28) / 256] [Stage 19:====> (20 + 28) / 256] [Stage 19:====> (21 + 28) / 256] [Stage 19:====> (22 + 28) / 256] 
[Stage 19:====> (23 + 28) / 256] [Stage 19:=====> (24 + 28) / 256] [Stage 19:=====> (25 + 28) / 256] [Stage 19:=====> (26 + 28) / 256] [Stage 19:=====> (27 + 28) / 256] [Stage 19:=====> (28 + 28) / 256] [Stage 19:======> (29 + 28) / 256] [Stage 19:======> (30 + 28) / 256] [Stage 19:======> (31 + 28) / 256] [Stage 19:======> (32 + 28) / 256] [Stage 19:======> (33 + 28) / 256] [Stage 19:=======> (34 + 28) / 256] [Stage 19:=======> (35 + 28) / 256] [Stage 19:=======> (36 + 28) / 256] [Stage 19:========> (38 + 28) / 256] [Stage 19:========> (39 + 28) / 256] [Stage 19:========> (40 + 28) / 256] [Stage 19:========> (41 + 28) / 256] [Stage 19:========> (42 + 28) / 256] [Stage 19:=========> (43 + 28) / 256] [Stage 19:=========> (44 + 28) / 256] [Stage 19:=========> (45 + 28) / 256] [Stage 19:=========> (46 + 28) / 256] [Stage 19:=========> (47 + 28) / 256] [Stage 19:==========> (48 + 28) / 256] [Stage 19:==========> (49 + 28) / 256] [Stage 19:==========> (50 + 28) / 256] [Stage 19:==========> (51 + 28) / 256] [Stage 19:===========> (53 + 28) / 256] [Stage 19:===========> (55 + 28) / 256] [Stage 19:===========> (56 + 28) / 256] [Stage 19:============> (57 + 28) / 256] [Stage 19:============> (58 + 28) / 256] [Stage 19:============> (59 + 28) / 256] [Stage 19:============> (60 + 28) / 256] [Stage 19:============> (61 + 28) / 256] [Stage 19:=============> (63 + 28) / 256] [Stage 19:=============> (64 + 28) / 256]13:45:37,639 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 83.3 in stage 17.0 (TID 4067, 172.18.1.22): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2923 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) [Stage 19:=============> (64 + 29) / 256] [Stage 19:=============> (65 + 29) / 256] [Stage 19:=============> (66 + 29) 
/ 256]13:45:42,324 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 84.3 in stage 17.0 (TID 4068, 172.18.1.32): TaskKilled (killed intentionally) [Stage 19:=============> (66 + 30) / 256] [Stage 19:==============> (67 + 30) / 256] [Stage 19:==============> (68 + 30) / 256] [Stage 19:==============> (68 + 31) / 256] [Stage 19:==============> (68 + 33) / 256]13:45:47,122 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 76.3 in stage 17.0 (TID 4075, 172.18.1.18): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2553 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave18 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) [Stage 19:==============> (68 + 34) / 256] [Stage 19:==============> (69 + 34) / 256] [Stage 19:==============> (70 + 34) / 256]13:45:47,949 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 93.3 in stage 17.0 (TID 4076, 172.18.1.28): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2427 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at 
org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) [Stage 19:==============> (70 + 35) / 256]13:45:48,130 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 73.3 in stage 17.0 (TID 4078, 172.18.1.30): TaskKilled (killed intentionally) [Stage 19:==============> (70 + 36) / 256] [Stage 19:==============> (70 + 37) / 256]13:45:48,965 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 79.3 in stage 17.0 (TID 4070, 172.18.1.24): TaskKilled (killed intentionally) [Stage 19:==============> (71 + 38) / 256] [Stage 19:===============> (72 + 38) / 256] [Stage 19:===============> (73 + 38) / 256] [Stage 19:===============> (74 + 38) / 256] [Stage 19:===============> (75 + 38) / 256] [Stage 19:================> (76 + 38) / 256] [Stage 19:================> (77 + 38) / 256] [Stage 19:================> (78 + 38) / 256] [Stage 19:================> (80 + 38) / 256] [Stage 19:=================> (81 + 38) / 256] [Stage 19:=================> (82 + 38) / 256]13:45:59,656 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 81.3 in stage 17.0 (TID 4072, 172.18.1.26): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2876 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave21 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at 
org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) [Stage 19:=================> (83 + 39) / 256] [Stage 19:=================> (85 + 39) / 256] [Stage 19:==================> (86 + 39) / 256] [Stage 19:==================> (88 + 39) / 256] [Stage 19:==================> (89 + 39) / 256]13:46:02,175 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 91.3 in stage 17.0 (TID 4080, 172.18.1.26): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2906 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave18 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) [Stage 19:==================> (89 + 41) / 256] [Stage 19:==================> (90 + 41) / 256] [Stage 19:===================> (91 + 41) / 256] [Stage 19:===================> (92 + 41) / 256] [Stage 19:===================> (93 + 41) / 256] [Stage 19:===================> (94 + 41) / 256] [Stage 19:====================> (96 + 41) / 256] [Stage 19:====================> (97 + 41) / 256] [Stage 19:====================> (98 + 41) / 256] [Stage 19:====================> (99 + 41) / 256] [Stage 19:====================> (101 + 41) / 256] [Stage 19:=====================> (102 + 41) / 256]13:46:15,728 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 200.2 in stage 17.0 (TID 4046, 172.18.1.32): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2553 returned 
server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave21 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:46:15,732 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 201.2 in stage 17.0 (TID 4047, 172.18.1.32): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2554 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave18 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at 
org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) [Stage 19:=====================> (103 + 43) / 256]13:46:16,577 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Lost task 204.2 in stage 17.0 (TID 4048, 172.18.1.32): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2556 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave24 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) [Stage 19:=====================> (103 + 44) / 256] [Stage 19:=====================> (104 + 46) / 256] [Stage 19:=====================> (105 + 47) / 256]13:46:18,001 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 208.2 in stage 17.0 (TID 4052, 172.18.1.22): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2976 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at 
org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745)
[Stage 19: (105 + 48) / 256]
13:46:18,719 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 209.2 in stage 17.0 (TID 4053, 172.18.1.22): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2979 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave3
	at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343)
	at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132)
	at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118)
	at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56)
	at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35)
	at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91)
	at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75)
	at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13)
	at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
	at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
	at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
	at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209)
	at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
	at org.apache.spark.scheduler.Task.run(Task.scala:88)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
[subsequent lost-task warnings in this run repeat the identical stack trace; the duplicate traces are omitted below and only the WARN headers are kept]
[Stage 19: (106 + 49) / 256] ... [Stage 19: (125 + 57) / 256]
13:46:29,662 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 221.2 in stage 17.0 (TID 4064, 172.18.1.20): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2616 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15
[Stage 19: (125 + 58) / 256] ... [Stage 19: (129 + 60) / 256]
13:46:33,729 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 203.2 in stage 17.0 (TID 4065, 172.18.1.30): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=3212 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave24
[Stage 19: (129 + 61) / 256]
13:46:35,308 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 223.2 in stage 17.0 (TID 4066, 172.18.1.22): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=3007 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave21
[Stage 19: (130 + 62) / 256] ... [Stage 19: (147 + 52) / 256]
13:46:44,179 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 222.2 in stage 17.0 (TID 4069, 172.18.1.18): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2599 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave24
[Stage 19: (147 + 53) / 256] ... [Stage 19: (167 + 63) / 256]
13:46:54,455 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Lost task 212.2 in stage 17.0 (TID 4079, 172.18.1.24): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2928 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave18
[Stage 19: (167 + 64) / 256]
13:46:54,693 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 68.0 in stage 19.0 (TID 4108, 172.18.1.28): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2464 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave21
13:46:54,694 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 65.0 in stage 19.0 (TID 4095, 172.18.1.28): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2463 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave18
13:46:54,722 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 64.0 in stage 19.0 (TID 4087, 172.18.1.28): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2467 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15
[Stage 19: (168 + 63) / 256] ... [Stage 19: (198 + 58) / 256]
13:49:12,750 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 195.0 in stage 19.0 (TID 4177, 172.18.1.32): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2614 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15
[Stage 19: (199 + 57) / 256]
13:49:21,325 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 196.0 in stage 19.0 (TID 4188, 172.18.1.32): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2617 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave24
13:49:23,223 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Lost task 79.0 in stage 19.0 (TID 4296, 172.18.1.22): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=3158 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave9
13:49:23,225 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 90.0 in stage 19.0 (TID 4307, 172.18.1.30): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=3359 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave18
13:49:23,238 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 75.0 in stage 19.0 (TID 4292, 172.18.1.20): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2761 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15
13:49:25,983 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 84.0 in stage 19.0 (TID 4301, 172.18.1.22): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=3167 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave21
13:49:35,332 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 74.0 in stage 19.0 (TID 4291, 172.18.1.22): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=3181 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave18
13:49:36,477 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 81.0 in stage 19.0 (TID 4298, 172.18.1.24): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=3012 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave21
13:49:45,319 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 82.0 in stage 19.0 (TID 4299, 172.18.1.26): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=3034 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave6
13:49:45,984 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 78.0 in stage 19.0 (TID 4295, 172.18.1.30): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=3382 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave9
13:49:50,494 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 92.0 in stage 19.0 (TID 4309, 172.18.1.22): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=3161 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave24
13:49:51,325 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 67.1 in stage 19.0 (TID 4317, 172.18.1.28): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2536 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave18
13:49:53,207 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Lost task 64.1 in stage 19.0 (TID 4316, 172.18.1.28): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2537 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15
13:49:53,224 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 68.1 in stage 19.0 (TID 4319, 172.18.1.18): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2749 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave21
[Stage 19: (199 + 57) / 256]
13:50:25,192 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 200.0 in stage 19.0 (TID 4314, 172.18.1.28): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2559 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave21
13:50:30,659 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 202.0 in stage 19.0 (TID 4321, 172.18.1.18): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2773 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave24
13:50:38,149 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 208.0 in stage 19.0 (TID 4327, 172.18.1.26): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=3066 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15
13:50:38,935 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 209.0 in stage 19.0 (TID 4328, 172.18.1.24): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=3052 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave3
13:50:40,783 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 212.0 in stage 19.0 (TID 4331, 172.18.1.20): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2790 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave18
13:50:42,001 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 213.0 in stage 19.0 (TID 4332, 172.18.1.20): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2792 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave24
13:50:43,467 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 220.0 in stage 19.0 (TID 4339, 172.18.1.20): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2796 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave21
[Stage 19: (200 + 56) / 256]
13:51:24,702 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 95.0 in stage 19.0 (TID 4312, 172.18.1.24): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=3072 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15 [trace identical except for one additional frame, org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:114)]
13:51:53,376 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 85.1 in stage 19.0 (TID 4348, 172.18.1.28): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2589 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15
13:51:53,377 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Lost task 90.1 in stage 19.0 (TID 4346, 172.18.1.28): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2590 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave18
13:51:53,387 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 79.1 in stage 19.0 (TID 4345, 172.18.1.30): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=3415 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave9 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at 
org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:51:53,419 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 72.1 in stage 19.0 (TID 4349, 172.18.1.32): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2665 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave24 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) [Stage 19:=========================================> (200 + 56) / 256]13:51:56,112 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost 
task 84.1 in stage 19.0 (TID 4354, 172.18.1.22): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=3221 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave21 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:52:08,348 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Lost task 74.1 in stage 19.0 (TID 4358, 172.18.1.22): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=3246 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave18 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at 
org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) [Stage 19:=========================================> (201 + 55) / 256]13:52:17,460 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Lost task 81.1 in stage 19.0 (TID 4359, 172.18.1.22): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=3232 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave21 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:52:20,662 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 92.1 in stage 19.0 (TID 4366, 172.18.1.22): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=3236 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave18 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) 
at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:52:23,361 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 64.2 in stage 19.0 (TID 4369, 172.18.1.28): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2608 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:52:26,076 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 82.1 in stage 19.0 (TID 4363, 172.18.1.30): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=3425 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave6 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at 
org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:52:42,917 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 195.1 in stage 19.0 (TID 4343, 172.18.1.32): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2690 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:52:46,143 WARN [org.apache.spark.scheduler.TaskSetManager] 
(task-result-getter-3) Lost task 78.1 in stage 19.0 (TID 4364, 172.18.1.24): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=3082 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave9 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:52:48,029 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 80.1 in stage 19.0 (TID 4365, 172.18.1.32): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2673 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave6 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at 
org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:52:49,823 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 88.1 in stage 19.0 (TID 4367, 172.18.1.26): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=3103 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave24 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:52:51,543 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 196.1 in stage 19.0 (TID 4344, 172.18.1.32): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2692 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave3 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at 
org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) [Stage 19:=========================================> (201 + 55) / 256]13:53:23,365 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Lost task 197.1 in stage 19.0 (TID 4373, 172.18.1.32): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2701 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:53:23,373 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 198.1 in stage 19.0 (TID 4372, 172.18.1.32): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2703 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave24 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at 
org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:53:23,442 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 199.1 in stage 19.0 (TID 4374, 172.18.1.32): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2705 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave21 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at 
java.lang.Thread.run(Thread.java:745) 13:53:54,831 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 95.1 in stage 19.0 (TID 4398, 172.18.1.18): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2838 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave18 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) [Stage 19:=========================================> (201 + 54) / 256] [Stage 19:=========================================> (201 + 55) / 256]13:53:55,321 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Lost task 200.1 in stage 19.0 (TID 4375, 172.18.1.18): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2840 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave21 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at 
org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:54:00,921 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 202.1 in stage 19.0 (TID 4376, 172.18.1.28): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2645 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave24 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:54:08,325 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 208.1 in stage 19.0 (TID 4382, 172.18.1.24): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=3125 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at 
org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:54:09,048 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Lost task 209.1 in stage 19.0 (TID 4383, 172.18.1.30): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=3459 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave3 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:54:11,955 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 212.1 in stage 19.0 (TID 4386, 172.18.1.24): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=3129 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave18 at 
org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:54:12,212 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Lost task 213.1 in stage 19.0 (TID 4387, 172.18.1.30): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=3461 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave24 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:54:13,675 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 220.1 in stage 19.0 (TID 4394, 172.18.1.26): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=3137 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave21 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:54:23,525 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 79.2 in stage 19.0 (TID 4402, 172.18.1.24): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=3135 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave9 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at 
org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:54:24,829 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Lost task 72.2 in stage 19.0 (TID 4403, 172.18.1.26): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=3141 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave24 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:54:25,321 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Lost task 90.2 in stage 19.0 (TID 4399, 172.18.1.28): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2656 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave18 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at 
org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:54:30,923 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 75.2 in stage 19.0 (TID 4401, 172.18.1.28): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2657 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave15 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:54:35,657 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-3) Lost task 84.2 in stage 19.0 (TID 4405, 172.18.1.30): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=3469 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave21 at 
org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:54:37,101 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-0) Lost task 73.2 in stage 19.0 (TID 4406, 172.18.1.20): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2859 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave18 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:54:48,657 WARN [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-1) Lost task 81.2 in stage 19.0 (TID 4415, 172.18.1.28): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2669 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave18 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:54:51,644 ERROR [org.apache.spark.scheduler.TaskSetManager] (task-result-getter-2) Task 67 in stage 19.0 failed 4 times; aborting job 13:54:51,650 ERROR [org.radargun.stages.mapreduce.MapReduceStage] (sc-main) executeMapReduceTask() returned an exception org.apache.spark.SparkException: Job aborted due to stage failure: Task 67 in stage 19.0 failed 4 times, most recent failure: Lost task 67.3 in stage 19.0 (TID 4417, 172.18.1.28): org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2673 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave18 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at 
org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Driver stacktrace: at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1283) at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1271) at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1270) at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47) at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1270) at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697) at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697) at scala.Option.foreach(Option.scala:236) at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:697) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1496) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1458) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1447) at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48) at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:567) at org.apache.spark.SparkContext.runJob(SparkContext.scala:1822) at org.apache.spark.SparkContext.runJob(SparkContext.scala:1835) at org.apache.spark.SparkContext.runJob(SparkContext.scala:1848) at org.apache.spark.SparkContext.runJob(SparkContext.scala:1919) at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:905) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108) at org.apache.spark.rdd.RDD.withScope(RDD.scala:306) at org.apache.spark.rdd.RDD.collect(RDD.scala:904) at org.apache.spark.rdd.PairRDDFunctions$$anonfun$collectAsMap$1.apply(PairRDDFunctions.scala:686) at org.apache.spark.rdd.PairRDDFunctions$$anonfun$collectAsMap$1.apply(PairRDDFunctions.scala:685) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108) at org.apache.spark.rdd.RDD.withScope(RDD.scala:306) at org.apache.spark.rdd.PairRDDFunctions.collectAsMap(PairRDDFunctions.scala:685) at org.apache.spark.api.java.JavaPairRDD.collectAsMap(JavaPairRDD.scala:646) at org.radargun.service.SparkMapReduce$MapToPairReduceByKeyTask.execute(SparkMapReduce.java:197) at 
org.radargun.stages.mapreduce.MapReduceStage.executeMapReduceTask(MapReduceStage.java:324) at org.radargun.stages.mapreduce.MapReduceStage.executeOnSlave(MapReduceStage.java:213) at org.radargun.SlaveBase.scenarioLoop(SlaveBase.java:87) at org.radargun.SlaveBase$ScenarioRunner.run(SlaveBase.java:151) Caused by: org.infinispan.client.hotrod.exceptions.HotRodClientException:Request for messageId=2673 returned server error (status=0x86): org.infinispan.util.concurrent.TimeoutException: Replication timeout for slave18 at org.infinispan.client.hotrod.impl.protocol.Codec20.checkForErrorsInResponseStatus(Codec20.java:343) at org.infinispan.client.hotrod.impl.protocol.Codec20.readPartialHeader(Codec20.java:132) at org.infinispan.client.hotrod.impl.protocol.Codec20.readHeader(Codec20.java:118) at org.infinispan.client.hotrod.impl.operations.HotRodOperation.readHeaderAndValidate(HotRodOperation.java:56) at org.infinispan.client.hotrod.impl.operations.IterationNextOperation.execute(IterationNextOperation.java:35) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.fetch(RemoteCloseableIterator.java:91) at org.infinispan.client.hotrod.impl.iteration.RemoteCloseableIterator.hasNext(RemoteCloseableIterator.java:75) at org.infinispan.spark.rdd.InfinispanIterator.hasNext(InfinispanIterator.scala:13) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:209) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:88) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 13:54:51,652 INFO [org.radargun.stages.mapreduce.MapReduceStage] (sc-main) -------------------- 13:54:51,653 INFO [org.radargun.stages.mapreduce.MapReduceStage] (sc-main) 9: Got the same results for two Map/Reduce runs 13:54:51,653 INFO [org.radargun.Slave] (sc-main) Finished stage MapReduce 13:54:51,661 INFO [org.radargun.RemoteMasterConnection] (sc-main) Response successfully sent to the master 13:54:51,688 INFO [org.radargun.Slave] (sc-main) Starting stage ScenarioDestroy 13:54:51,688 INFO [org.radargun.stages.ScenarioDestroyStage] (sc-main) Scenario finished, destroying... 
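The abort logged above ("Task 67 in stage 19.0 failed 4 times; aborting job") matches Spark's default per-task retry limit of 4, controlled by spark.task.maxFailures. A minimal, hypothetical sketch of raising that limit on the driver is shown below; the app name and the value 8 are placeholders, not settings taken from this run.

    import org.apache.spark.SparkConf;

    // Hypothetical sketch, not the configuration used in this run: raising
    // spark.task.maxFailures gives tasks that hit transient Hot Rod timeouts
    // more attempts before the DAGScheduler aborts the whole stage.
    public class TaskRetrySketch {
        public static void main(String[] args) {
            SparkConf conf = new SparkConf()
                    .setAppName("retry-sketch")          // placeholder app name
                    .set("spark.task.maxFailures", "8"); // Spark's default is 4
            System.out.println(conf.get("spark.task.maxFailures"));
        }
    }

Whether more retries actually help depends on the server side; if replication keeps timing out, the extra attempts only postpone the same failure.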
13:54:51,689 INFO [org.radargun.stages.ScenarioDestroyStage] (sc-main) Memory before cleanup:
  Runtime free: 18,270,675 kb Runtime max:24,117,248 kb Runtime total:24,117,248 kb
  MX Code Cache(Non-heap memory): used: 24,524 kb, init: 2,496 kb, committed: 24,960 kb, max: 245,760 kb
  MX Metaspace(Non-heap memory): used: 59,548 kb, init: 0 kb, committed: 60,760 kb, max: 0 kb
  MX Compressed Class Space(Non-heap memory): used: 9,091 kb, init: 0 kb, committed: 9,344 kb, max: 1,048,576 kb
  MX PS Eden Space(Heap memory): used: 5,821,617 kb, init: 6,291,456 kb, committed: 6,291,456 kb, max: 6,291,456 kb
  MX PS Survivor Space(Heap memory): used: 0 kb, init: 1,048,576 kb, committed: 1,048,576 kb, max: 1,048,576 kb
  MX PS Old Gen(Heap memory): used: 24,955 kb, init: 16,777,216 kb, committed: 16,777,216 kb, max: 16,777,216 kb
13:54:51,690 INFO [org.radargun.stages.lifecycle.LifecycleHelper] (sc-main) Stopping service.
13:54:51,745 INFO [org.spark-project.jetty.server.handler.ContextHandler] (StopThread) stopped o.s.j.s.ServletContextHandler{/metrics/json,null}
13:54:51,747 INFO [org.spark-project.jetty.server.handler.ContextHandler] (StopThread) stopped o.s.j.s.ServletContextHandler{/stages/stage/kill,null}
13:54:51,747 INFO [org.spark-project.jetty.server.handler.ContextHandler] (StopThread) stopped o.s.j.s.ServletContextHandler{/api,null}
13:54:51,748 INFO [org.spark-project.jetty.server.handler.ContextHandler] (StopThread) stopped o.s.j.s.ServletContextHandler{/,null}
13:54:51,749 INFO [org.spark-project.jetty.server.handler.ContextHandler] (StopThread) stopped o.s.j.s.ServletContextHandler{/static,null}
13:54:51,749 INFO [org.spark-project.jetty.server.handler.ContextHandler] (StopThread) stopped o.s.j.s.ServletContextHandler{/executors/threadDump/json,null}
13:54:51,750 INFO [org.spark-project.jetty.server.handler.ContextHandler] (StopThread) stopped o.s.j.s.ServletContextHandler{/executors/threadDump,null}
13:54:51,751 INFO [org.spark-project.jetty.server.handler.ContextHandler] (StopThread) stopped o.s.j.s.ServletContextHandler{/executors/json,null}
13:54:51,751 INFO [org.spark-project.jetty.server.handler.ContextHandler] (StopThread) stopped o.s.j.s.ServletContextHandler{/executors,null}
13:54:51,752 INFO [org.spark-project.jetty.server.handler.ContextHandler] (StopThread) stopped o.s.j.s.ServletContextHandler{/environment/json,null}
13:54:51,753 INFO [org.spark-project.jetty.server.handler.ContextHandler] (StopThread) stopped o.s.j.s.ServletContextHandler{/environment,null}
13:54:51,753 INFO [org.spark-project.jetty.server.handler.ContextHandler] (StopThread) stopped o.s.j.s.ServletContextHandler{/storage/rdd/json,null}
13:54:51,754 INFO [org.spark-project.jetty.server.handler.ContextHandler] (StopThread) stopped o.s.j.s.ServletContextHandler{/storage/rdd,null}
13:54:51,755 INFO [org.spark-project.jetty.server.handler.ContextHandler] (StopThread) stopped o.s.j.s.ServletContextHandler{/storage/json,null}
13:54:51,755 INFO [org.spark-project.jetty.server.handler.ContextHandler] (StopThread) stopped o.s.j.s.ServletContextHandler{/storage,null}
13:54:51,756 INFO [org.spark-project.jetty.server.handler.ContextHandler] (StopThread) stopped o.s.j.s.ServletContextHandler{/stages/pool/json,null}
13:54:51,757 INFO [org.spark-project.jetty.server.handler.ContextHandler] (StopThread) stopped o.s.j.s.ServletContextHandler{/stages/pool,null}
13:54:51,757 INFO [org.spark-project.jetty.server.handler.ContextHandler] (StopThread) stopped o.s.j.s.ServletContextHandler{/stages/stage/json,null}
13:54:51,758 INFO [org.spark-project.jetty.server.handler.ContextHandler] (StopThread) stopped o.s.j.s.ServletContextHandler{/stages/stage,null}
13:54:51,760 INFO [org.spark-project.jetty.server.handler.ContextHandler] (StopThread) stopped o.s.j.s.ServletContextHandler{/stages/json,null}
13:54:51,760 INFO [org.spark-project.jetty.server.handler.ContextHandler] (StopThread) stopped o.s.j.s.ServletContextHandler{/stages,null}
13:54:51,760 INFO [org.spark-project.jetty.server.handler.ContextHandler] (StopThread) stopped o.s.j.s.ServletContextHandler{/jobs/job/json,null}
13:54:51,761 INFO [org.spark-project.jetty.server.handler.ContextHandler] (StopThread) stopped o.s.j.s.ServletContextHandler{/jobs/job,null}
13:54:51,761 INFO [org.spark-project.jetty.server.handler.ContextHandler] (StopThread) stopped o.s.j.s.ServletContextHandler{/jobs/json,null}
13:54:51,761 INFO [org.spark-project.jetty.server.handler.ContextHandler] (StopThread) stopped o.s.j.s.ServletContextHandler{/jobs,null}
13:54:52,044 INFO [org.radargun.stages.ScenarioDestroyStage] (sc-main) Service successfully stopped.
13:54:52,046 INFO [org.radargun.Slave] (sc-main) Finished stage ScenarioDestroy
13:54:52,046 INFO [org.radargun.RemoteMasterConnection] (sc-main) Response successfully sent to the master
13:54:52,057 INFO [akka.remote.RemoteActorRefProvider$RemotingTerminator] (sparkDriver-akka.actor.default-dispatcher-17) Shutting down remote daemon.
13:54:52,064 INFO [akka.remote.RemoteActorRefProvider$RemotingTerminator] (sparkDriver-akka.actor.default-dispatcher-17) Remote daemon shut down; proceeding with flushing remote transports.
13:54:52,132 INFO [akka.remote.RemoteActorRefProvider$RemotingTerminator] (sparkDriver-akka.actor.default-dispatcher-17) Remoting shut down.
13:55:27,568 INFO [org.radargun.Slave] (main) Starting stage ScenarioCleanup
13:55:27,571 WARN [org.radargun.stages.ScenarioCleanupStage] (main) Unfinished thread ForkJoinPool-4-worker-15 (id=106, state=WAITING)
  at sun.misc.Unsafe.park(Native Method)
  at scala.concurrent.forkjoin.ForkJoinPool.scan(ForkJoinPool.java:2075)
  at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
  at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
13:55:27,571 INFO [org.radargun.stages.ScenarioCleanupStage] (main) Interrupting thread ForkJoinPool-4-worker-15 (id=106, state=WAITING)
13:55:32,572 INFO [org.radargun.stages.ScenarioCleanupStage] (main) Stopping thread ForkJoinPool-4-worker-15 (id=106, state=WAITING)
java.lang.ThreadDeath
  at java.lang.Thread.stop(Thread.java:850)
  at org.radargun.stages.ScenarioCleanupStage.stopUnfinished(ScenarioCleanupStage.java:166)
  at org.radargun.stages.ScenarioCleanupStage.executeOnSlave(ScenarioCleanupStage.java:61)
  at org.radargun.SlaveBase.runCleanup(SlaveBase.java:127)
  at org.radargun.Slave.run(Slave.java:52)
  at org.radargun.Slave.main(Slave.java:67)
13:55:32,574 INFO [org.radargun.stages.ScenarioCleanupStage] (main) Is thread ForkJoinPool-4-worker-15 (id=106, state=TERMINATED)) alive? false
6145.820: [GC (System.gc()) [PSYoungGen: 6223136K->157583K(7340032K)] 6248091K->182554K(24117248K), 0.1225336 secs] [Times: user=0.52 sys=0.19, real=0.12 secs]
6145.943: [Full GC (System.gc()) [PSYoungGen: 157583K->0K(7340032K)] [ParOldGen: 24971K->157226K(16777216K)] 182554K->157226K(24117248K), [Metaspace: 60068K->60011K(1101824K)], 0.3953436 secs] [Times: user=1.45 sys=0.37, real=0.39 secs]
13:55:33,094 INFO [org.radargun.stages.ScenarioCleanupStage] (main) Memory after cleanup:
  Runtime free: 23,897,106 kb Runtime max:24,117,248 kb Runtime total:24,117,248 kb
  MX Code Cache(Non-heap memory): used: 24,684 kb, init: 2,496 kb, committed: 24,960 kb, max: 245,760 kb
  MX Metaspace(Non-heap memory): used: 60,011 kb, init: 0 kb, committed: 61,272 kb, max: 0 kb
  MX Compressed Class Space(Non-heap memory): used: 9,240 kb, init: 0 kb, committed: 9,600 kb, max: 1,048,576 kb
  MX PS Eden Space(Heap memory): used: 188,743 kb, init: 6,291,456 kb, committed: 6,291,456 kb, max: 6,291,456 kb
  MX PS Survivor Space(Heap memory): used: 0 kb, init: 1,048,576 kb, committed: 1,048,576 kb, max: 1,048,576 kb
  MX PS Old Gen(Heap memory): used: 157,226 kb, init: 16,777,216 kb, committed: 16,777,216 kb, max: 16,777,216 kb
13:55:33,098 INFO [org.radargun.RemoteMasterConnection] (main) Response successfully sent to the master
13:55:33,130 INFO [org.radargun.RemoteMasterConnection] (main) Response successfully sent to the master
13:55:36,605 INFO [org.radargun.Slave] (main) Master shutdown!
13:55:36,608 INFO [org.radargun.ShutDownHook] (Thread-0) Slave process is being shutdown
Heap
 PSYoungGen total 7340032K, used 223060K [0x00000005c0000000, 0x00000007c0000000, 0x00000007c0000000)
  eden space 6291456K, 3% used [0x00000005c0000000,0x00000005cd9d5390,0x0000000740000000)
  from space 1048576K, 0% used [0x0000000780000000,0x0000000780000000,0x00000007c0000000)
  to space 1048576K, 0% used [0x0000000740000000,0x0000000740000000,0x0000000780000000)
 ParOldGen total 16777216K, used 157226K [0x00000001c0000000, 0x00000005c0000000, 0x00000005c0000000)
  object space 16777216K, 0% used [0x00000001c0000000,0x00000001c998ab68,0x00000005c0000000)
 Metaspace used 60043K, capacity 61030K, committed 61272K, reserved 1101824K
  class space used 9249K, capacity 9474K, committed 9600K, reserved 1048576K
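Every lost task above has the same root cause: a server-side org.infinispan.util.concurrent.TimeoutException (replication timed out on one of the slaves) that the Hot Rod client reports as a server error (status=0x86) while fetching the next batch of entries in RemoteCloseableIterator.fetch. That timeout has to be addressed on the server side (typically the replication/remote timeout of the clustered cache, or whatever is making replication stall under the iteration load); on the client side, the socket timeout only determines how long the iterator waits before failing on its own. The sketch below is a minimal, hypothetical Hot Rod client setup; the host, port and timeout value are placeholders, not the ones used in this test.

    import org.infinispan.client.hotrod.RemoteCacheManager;
    import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

    // Hypothetical sketch, not the configuration used in this run. The
    // replication timeout itself is a server-side cache setting; the client
    // socket timeout below only controls how long the Hot Rod client waits
    // for a response before giving up on its own.
    public class HotRodTimeoutSketch {
        public static void main(String[] args) {
            ConfigurationBuilder builder = new ConfigurationBuilder();
            builder.addServer().host("127.0.0.1").port(11222); // placeholder server
            builder.socketTimeout(120000);                     // placeholder value, in milliseconds
            RemoteCacheManager rcm = new RemoteCacheManager(builder.build());
            System.out.println("default cache size: " + rcm.getCache().size());
            rcm.stop();
        }
    }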