WildFly / WFLY-15094

Exposing infinispan metrics is slow


    • Type: Bug
    • Resolution: Won't Do
    • Priority: Minor
    • Fix Version/s: None
    • Affects Version/s: 23.0.2.Final, 24.0.1.Final
    • Component/s: MP Metrics
    • Labels: None

      I was able to reproduce this with a clean WildFly 24 installation with the microprofile-metrics-smallrye extension added to the standalone configuration, and a Wicket quickstart application deployed with only Hibernate added and 200 basic entities.

      I added the server to Eclipse, added the myproject deployment, then called http://127.0.0.1:9990/metrics.
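
      For reference, a minimal Java 11 sketch of how I time a single scrape of the endpoint (this assumes the default local management address and that /metrics is reachable without authentication; the class name is just for illustration):

      import java.net.URI;
      import java.net.http.HttpClient;
      import java.net.http.HttpRequest;
      import java.net.http.HttpResponse;

      // Times one request to the management /metrics endpoint and prints the elapsed time.
      public class MetricsTiming {
          public static void main(String[] args) throws Exception {
              HttpClient client = HttpClient.newHttpClient();
              HttpRequest request = HttpRequest.newBuilder()
                      .uri(URI.create("http://127.0.0.1:9990/metrics"))
                      .GET()
                      .build();

              long start = System.nanoTime();
              HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
              long elapsedMs = (System.nanoTime() - start) / 1_000_000;

              System.out.println("HTTP " + response.statusCode() + " in " + elapsedMs + " ms, "
                      + response.body().length() + " chars");
          }
      }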

       

      I also did some profiling, and one thing that stood out is that failing to find a capability service name is slow (about 20% of the total time). When the capability is not found, an exception is thrown; the exception takes time to create, the method is called a lot, and the exception is ignored anyway. So this is not the only cause, but it is definitely a contributor.
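
      To illustrate the cost pattern (a self-contained sketch, not the actual WildFly CapabilityRegistry API): a lookup that reports a miss by throwing pays for Throwable.fillInStackTrace on every call, even when the caller catches and discards the exception, whereas returning an Optional (or checking first) avoids that work:

      import java.util.Map;
      import java.util.Optional;

      // Hypothetical registry used only to illustrate the pattern seen in the profile.
      final class Registry {
          private final Map<String, String> serviceNames;

          Registry(Map<String, String> serviceNames) {
              this.serviceNames = serviceNames;
          }

          // Pattern from the thread dump: a miss is reported by throwing,
          // so every miss builds a full stack trace.
          String lookupOrThrow(String capability) {
              String name = serviceNames.get(capability);
              if (name == null) {
                  throw new IllegalStateException("unknown capability " + capability);
              }
              return name;
          }

          // Cheaper alternative for hot paths: report a miss without an exception.
          Optional<String> lookup(String capability) {
              return Optional.ofNullable(serviceNames.get(capability));
          }
      }

      public class CapabilityLookupCost {
          public static void main(String[] args) {
              Registry registry = new Registry(Map.of());
              int misses = 200_000;

              long t0 = System.nanoTime();
              for (int i = 0; i < misses; i++) {
                  try {
                      registry.lookupOrThrow("some.missing.capability");
                  } catch (IllegalStateException ignored) {
                      // mirrors the caller discarding the exception
                  }
              }
              long throwing = System.nanoTime() - t0;

              long t1 = System.nanoTime();
              for (int i = 0; i < misses; i++) {
                  registry.lookup("some.missing.capability");
              }
              long optional = System.nanoTime() - t1;

              System.out.printf("throw-and-ignore: %d ms, Optional: %d ms%n",
                      throwing / 1_000_000, optional / 1_000_000);
          }
      }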

      Thread dump of the management thread during the request, showing where the calls are coming from:

      "management I/O-1" #72 prio=5 os_prio=0 cpu=24574.31ms elapsed=52520.41s tid=0x0000558c8efa6800 nid=0xd1 runnable  [0x00007fc20bb8b000]
         java.lang.Thread.State: RUNNABLE
              at java.lang.Throwable.fillInStackTrace(java.base@11.0.12/Native Method)
              at java.lang.Throwable.fillInStackTrace(java.base@11.0.12/Unknown Source)
              - locked <0x00000006bff6fba0> (a java.lang.IllegalStateException)
              at java.lang.Throwable.<init>(java.base@11.0.12/Unknown Source)
              at java.lang.Exception.<init>(java.base@11.0.12/Unknown Source)
              at java.lang.RuntimeException.<init>(java.base@11.0.12/Unknown Source)
              at java.lang.IllegalStateException.<init>(java.base@11.0.12/Unknown Source)
              at org.jboss.as.controller.logging.ControllerLogger_$logger.unknownCapability(ControllerLogger_$logger.java:2711)
              at org.jboss.as.controller.CapabilityRegistry.getCapabilityRegistration(CapabilityRegistry.java:945)
              at org.jboss.as.controller.CapabilityRegistry.getCapabilityServiceName(CapabilityRegistry.java:679)
              at org.jboss.as.controller.OperationContextImpl$CapabilityServiceSupportImpl.getCapabilityServiceName(OperationContextImpl.java:2602)
              at org.jboss.as.controller.OperationContextImpl$CapabilityServiceSupportImpl.getCapabilityServiceName(OperationContextImpl.java:2612)
              at org.jboss.as.clustering.controller.BinaryRequirementServiceNameFactory.getServiceName(BinaryRequirementServiceNameFactory.java:50)
              at org.jboss.as.clustering.controller.DefaultableBinaryServiceNameFactoryProvider.getServiceName(DefaultableBinaryServiceNameFactoryProvider.java:49)
              at org.jboss.as.clustering.controller.BinaryServiceNameFactory.getServiceName(BinaryServiceNameFactory.java:61)
              at org.jboss.as.clustering.infinispan.subsystem.CacheMetricExecutor.execute(CacheMetricExecutor.java:56)
              at org.jboss.as.clustering.infinispan.subsystem.CacheMetricExecutor.execute(CacheMetricExecutor.java:40)
              at org.jboss.as.clustering.controller.MetricHandler.executeRuntimeStep(MetricHandler.java:75)
      
    • Workaround: Undefined

      When the microprofile-metrics-smallrye extension and subsystem are added with Infinispan metrics exposed, the /metrics endpoint on the management port becomes slow (about 3 seconds in our case). This is a problem because there are only two management threads, so the health endpoints can become very slow as well while they wait.

        Attachments:
        1. myproject.zip (122 kB)
        2. standalone-2.xml (31 kB)

              Assignee: Jason Lee
              Reporter: Mark Snijder (Inactive)
              Votes: 1
              Watchers: 3

                Created:
                Updated:
                Resolved: