Connectivity Link / CONNLINK-595

TokenRateLimitPolicy flapping between 'Accepted' and 'MissingDependency' on RC1


    • Type: Bug
    • Resolution: Unresolved
    • Priority: Undefined
    • Fix Version/s: 1.2.1
    • Affects Version/s: 1.2.0
    • Component/s: RHCL Operator
    • Labels: None

      We are experiencing an issue with our TokenRateLimitPolicy (TRLP) on the RHCL RC1 stack. The policy's status is "flashing" or "flapping" indefinitely between an Accepted/Enforced state and a MissingDependency state.

      When in the MissingDependency state, the condition message is: '[token rate limit policy validation has not finished] is not installed, please restart Kuadrant Operator pod once dependency has been installed'

      We observed similar behavior in the upstream project, but it typically resolved itself within 5-10 minutes. On RC1, this flapping state is continuous and indefinite, which is causing issues with the actual application of rate limiting. We are using the same installation pattern as our upstream deployment.
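
      The condition message asks for a restart of the Kuadrant Operator pod. For reference, that restart would look something like the command below; the deployment name and namespace are assumptions based on a typical Kuadrant install and may differ on the RHCL stack:

      # Assumed deployment name and namespace for a typical Kuadrant install; adjust for RHCL.
      oc rollout restart deployment/kuadrant-operator-controller-manager -n kuadrant-system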

      Environment:

      • Product: RHCL RC1
      • Components: Kuadrant Operator, Limitador
      • Application: Upstream MaaS

      Steps to Reproduce:

      1. Deploy the MaaS application on RHCL RC1 with Kuadrant.
      2. Apply the TokenRateLimitPolicy (configuration below) to the Gateway.
      3. Observe the status of the TRLP: oc get tokenratelimitpolicy gateway-token-rate-limits -n openshift-ingress -w (a small watch loop that isolates the conditions is sketched after this list).
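
      To make the cycling easier to capture, a minimal watch loop such as the one below prints a timestamped summary of each condition (it assumes a bash shell and reuses the resource name and namespace from the command above):

      # Print type/status/reason for every condition every few seconds so the
      # Accepted/Enforced <-> MissingDependency flapping is visible over time.
      while true; do
        echo "--- $(date -u +%FT%TZ)"
        oc get tokenratelimitpolicy gateway-token-rate-limits -n openshift-ingress \
          -o jsonpath='{range .status.conditions[*]}{.type}={.status} reason={.reason}{"\n"}{end}'
        sleep 5
      done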

      Observed Results:

      • The status.conditions of the TokenRateLimitPolicy continuously cycle between Accepted: True/Enforced: True and reason: MissingDependency.
      • This flapping does not resolve on its own.
      • Rate limiting is not being applied consistently to API requests (a request-loop check that can demonstrate this is sketched after this list).
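
      One way to demonstrate the inconsistent enforcement is a simple request loop against the gateway, counting the returned status codes. This is only a sketch: the endpoint URL, request body, and FREE_USER_TOKEN are placeholders rather than values from this deployment. With the free tier capped at 100 tokens per minute, requests made after that budget is exhausted should return 429 when the policy is enforced; a steady stream of 200s well past that point suggests it is not.

      # Hypothetical enforcement check; endpoint, payload, and token are placeholders.
      for i in $(seq 1 50); do
        curl -s -o /dev/null -w '%{http_code}\n' \
          -H "Authorization: Bearer ${FREE_USER_TOKEN}" \
          -H 'Content-Type: application/json' \
          -d '{"prompt": "test", "max_tokens": 16}' \
          https://maas.example.com/v1/completions
      done | sort | uniq -c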

      Expected Results:

      • The TokenRateLimitPolicy should transition to Accepted: True / Enforced: True and remain in that state.
      • Rate limiting rules should be consistently enforced by Limitador.

      Additional Context - Operator Logs: During this time, we are also observing high-frequency, repetitive logging in the Kuadrant Operator Controller Manager, which may be related to this issue or to a separate reconciliation loop:

       


      {"level":"info","ts":"2025-10-29T13:36:27Z","logger":"kuadrant-operator.LimitadorLimitsReconciler","msg":"limitador object is up to date, nothing to do","status":"skipping"}
      {"level":"info","ts":"2025-10-29T13:36:27Z","logger":"kuadrant-operator.LimitadorLimitsReconciler","msg":"Limitador limits reconciler","status":"completed"}
      {"level":"info","ts":"2025-10-29T13:36:28Z","logger":"kuadrant-operator.event logger","msg":"new events","resources":["ConfigMap","WasmPlugin"],"eventTypes":{"update":2}}
      {"level":"info","ts":"2025-10-29T13:36:28Z","logger":"kuadrant-operator.IstioExtensionReconciler.buildWasmConfigs","msg":"build Wasm configuration","status":"started"}
      {"level":"info","ts":"2025-10-29T13:36:28Z","logger":"kuadrant-operator.IstioExtensionReconciler.buildWasmConfigs","msg":"build Wasm configuration","status":"completed"}
      {"level":"info","ts":"2025-10-29T13:36:28Z","logger":"kuadrant-operator.event logger","msg":"new events","resources":["ConfigMap","WasmPlugin","TokenRateLimitPolicy"],"eventTypes":{"update":3}}
      {"level":"info","ts":"2025-10-29T13:36:28Z","logger":"kuadrant-operator.LimitadorLimitsReconciler","msg":"Limitador limits reconciler","status":"started"}
      {"level":"info","ts":"2025-10-29T13:36:28Z","logger":"kuadrant-operator.IstioExtensionReconciler.buildWasmConfigs","msg":"build Wasm configuration","status":"started"}
      {"level":"info","ts":"2025-10-29T13:36:28Z","logger":"kuadrant-operator.IstioExtensionReconciler.buildWasmConfigs","msg":"build Wasm configuration","status":"completed"} 

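      The excerpt above was taken from the operator controller manager. A command along these lines can be used to capture and filter the same stream (the deployment name and namespace are assumptions about a typical install and may differ on RHCL):

      # Follow the controller-manager logs, keeping only the reconcilers seen cycling above.
      oc logs -f deployment/kuadrant-operator-controller-manager -n kuadrant-system \
        | grep -E 'LimitadorLimitsReconciler|IstioExtensionReconciler|event logger'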

      Configuration: This is the TokenRateLimitPolicy that is flapping:


       
      apiVersion: kuadrant.io/v1alpha1
      kind: TokenRateLimitPolicy
      metadata:
        creationTimestamp: '2025-10-28T23:55:05Z'
        generation: 1
        name: gateway-token-rate-limits
        namespace: openshift-ingress
        resourceVersion: '627440'
        uid: 7d8cc5f0-efdc-4015-8c76-a3770d74b27a
      spec:
        limits:
          enterprise-user-tokens:
            counters:
              - expression: auth.identity.userid
            rates:
              - limit: 100000
                window: 1m
            when:
              - predicate: |
                  auth.identity.tier == "enterprise"
          free-user-tokens:
            counters:
              - expression: auth.identity.userid
            rates:
              - limit: 100
                window: 1m
            when:
              - predicate: |
                  auth.identity.tier == "free"
          premium-user-tokens:
            counters:
              - expression: auth.identity.userid
            rates:
              - limit: 50000
                window: 1m
            when:
              - predicate: |
                  auth.identity.tier == "premium"
        targetRef:
          group: gateway.networking.k8s.io
          kind: Gateway
          name: maas-default-gateway
      status:
        # Note: The status block below is just one snapshot.
        # The actual status is cycling between this and 'MissingDependency'.
        conditions:
          - lastTransitionTime: '2025-10-29T13:44:13Z'
            message: TokenRateLimitPolicy has been accepted
            reason: Accepted
            status: 'True'
            type: Accepted
          - lastTransitionTime: '2025-10-29T13:44:13Z'
            message: TokenRateLimitPolicy has been successfully enforced
            reason: Enforced
            status: 'True'
            type: Enforced
        observedGeneration: 1 

       

              Jason Madigan (jmadigan@redhat.com)
              Jamie Land (jland@redhat.com)