- Bug
- Resolution: Done
- Major
- 1.13.0.Final
- None
- None
- False
- False
- This was integrated into https://issues.redhat.com/browse/FAI-602
- This JIRA tracks work undertaken with the Quarkus team.
DevServices can be run with either a shared or a non-shared network.
Shared means services started by DevServices (e.g. Kafka and PostgreSQL) are exposed only to Docker's container network, meaning they are not accessible from the local network. Non-shared means the services are exposed only to the local network and not to Docker's container network.
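For illustration, the difference can be sketched in Testcontainers terms (the library DevServices uses to start these containers). The image tag, network and alias below are assumptions for the sketch, not values taken from the PR:

    import org.testcontainers.containers.KafkaContainer;
    import org.testcontainers.containers.Network;
    import org.testcontainers.utility.DockerImageName;

    public class DevServicesNetworkSketch {
        public static void main(String[] args) {
            DockerImageName image = DockerImageName.parse("confluentinc/cp-kafka:7.5.0"); // assumed tag

            // "Shared" style: the broker joins a named Docker network and is reachable
            // from other containers through its alias, but not from the host.
            Network docker = Network.newNetwork();
            try (KafkaContainer shared = new KafkaContainer(image)
                    .withNetwork(docker)
                    .withNetworkAliases("kafka")) {
                shared.start();
                // Other containers attached to 'docker' can reach the broker via the alias "kafka".
            }

            // "Non-shared" style: only a host port mapping is published, so the broker
            // is reachable from the local network but not from other containers.
            try (KafkaContainer local = new KafkaContainer(image)) {
                local.start();
                System.out.println(local.getBootstrapServers()); // e.g. PLAINTEXT://localhost:49209
            }
        }
    }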
Trusty requires Quarkus's DevServices Kafka instance to be available to both the local network (e.g. the Quarkus application running locally with the tracing addon enabled) and the Docker container network (e.g. TrustyService running inside a Docker container, launched by the Kogito Runtime Tooling addon).
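For context on that requirement: the usual way to expose a single broker on two networks is to define two named listeners, each advertising the address that is valid on its network. A sketch in Kafka broker configuration terms (the listener names, the "kafka" host alias and the ports are assumptions, not the PR's actual configuration):

    listeners=INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:9093
    advertised.listeners=INTERNAL://kafka:9092,EXTERNAL://localhost:9093
    listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
    inter.broker.listener.name=INTERNAL

A client connecting through a given listener only receives that listener's advertised address in the metadata it downloads, so the broker still reports a single node ID.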
A PR has been submitted; however, it highlights an issue with Kafka too.
When a connection to Kafka is first attempted, the Kafka client downloads the broker metadata. However, since we need two broker addresses to be defined (one for the local network and one for the Docker container network), a duplicate key exception is thrown because both are assigned the default ID of zero. When multiple brokers are available, each normally receives a different ID, but here we have only one broker with, effectively, two aliases.
2021-11-25 15:24:09,125 ERROR [org.apa.kaf.cli.pro.int.Sender] (kafka-producer-network-thread | kafka-producer-kogito-tracing-decision) [Producer clientId=kafka-producer-kogito-tracing-decision] Uncaught error in kafka producer I/O thread:
java.lang.IllegalStateException: Duplicate key 0 (attempted merging values localhost:49209 (id: 0 rack: null) and kafka-ULge5:9092 (id: 0 rack: null))
    at java.base/java.util.stream.Collectors.duplicateKeyException(Collectors.java:133)
    at java.base/java.util.stream.Collectors.lambda$uniqKeysMapAccumulator$1(Collectors.java:180)
    at java.base/java.util.stream.ReduceOps$3ReducingSink.accept(ReduceOps.java:169)
    at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195)
    at java.base/java.util.Iterator.forEachRemaining(Iterator.java:133)
    at java.base/java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
    at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484)
    at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474)
    at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:913)
    at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
    at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:578)
    at org.apache.kafka.common.requests.MetadataResponse$Holder.createBrokers(MetadataResponse.java:414)
    at org.apache.kafka.common.requests.MetadataResponse$Holder.<init>(MetadataResponse.java:407)
    at org.apache.kafka.common.requests.MetadataResponse.holder(MetadataResponse.java:187)
    at org.apache.kafka.common.requests.MetadataResponse.topicMetadata(MetadataResponse.java:210)
    at org.apache.kafka.clients.NetworkClient$DefaultMetadataUpdater.handleSuccessfulResponse(NetworkClient.java:1086)
    at org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:887)
    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:570)
    at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:327)
    at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:242)
    at java.base/java.lang.Thread.run(Thread.java:832)
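For reference, the IllegalStateException above is the standard behaviour of java.util.stream.Collectors.toMap when two entries share a key; the metadata handling keys brokers by node ID, and both addresses report ID 0. A minimal, self-contained reproduction of just that collision (the record and addresses are illustrative, not Kafka's own classes):

    import java.util.List;
    import java.util.Map;
    import java.util.function.Function;
    import java.util.stream.Collectors;

    public class DuplicateBrokerIdSketch {
        // Stand-in for the broker entries in a Kafka metadata response.
        record Broker(int id, String address) { }

        public static void main(String[] args) {
            // One physical broker seen under two addresses, both reported with node ID 0.
            List<Broker> brokers = List.of(
                    new Broker(0, "localhost:49209"),
                    new Broker(0, "kafka-ULge5:9092"));

            // Keying the map by broker ID fails with "IllegalStateException: Duplicate key 0"
            // because the two entries collide on the same key.
            Map<Integer, Broker> byId = brokers.stream()
                    .collect(Collectors.toMap(Broker::id, Function.identity()));
            System.out.println(byId);
        }
    }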
- blocks
- FAI-681 Integrate TrustyService with Quarkus DevServices services
- Done