OpenShift Bugs / OCPBUGS-14092

[4.12] Object count quotas do not work for certain objects in ClusterResourceQuotas


    • Type: Bug
    • Resolution: Done
    • Priority: Normal
    • Target Version: 4.12.z
    • Affects Version/s: 4.13.0, 4.12.0, 4.11, 4.10.0
    • Severity: Moderate

      Description of problem:

      A customer has noticed that object count quotas ("count/*") do not work for certain objects in ClusterResourceQuotas. For example, the following namespaced ResourceQuota works as expected:
      
      ~~~
      apiVersion: v1
      kind: ResourceQuota
      metadata:
      [..]
      spec:
        hard:
          count/routes.route.openshift.io: "900"
          count/servicemonitors.monitoring.coreos.com: "100"
          pods: "100"
      status:
        hard:
          count/routes.route.openshift.io: "900"
          count/servicemonitors.monitoring.coreos.com: "100"
          pods: "100"
        used:
          count/routes.route.openshift.io: "0"
          count/servicemonitors.monitoring.coreos.com: "1"
          pods: "4"
      ~~~
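
      The namespaced usage can be confirmed with standard oc commands; the namespace and quota name below are placeholders, since they are elided ("[..]") in the output above:

      ~~~
      # Inspect the namespaced ResourceQuota; the "used" counters are populated.
      oc -n <namespace> describe resourcequota <quota-name>
      oc -n <namespace> get resourcequota <quota-name> -o jsonpath='{.status.used}{"\n"}'
      ~~~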
      
      However, when "count/servicemonitors.monitoring.coreos.com" is used in a ClusterResourceQuota, the count is not tracked (note the missing "used" entry):
      
      ~~~
      apiVersion: quota.openshift.io/v1
      kind: ClusterResourceQuota
      metadata:
      [..]
      spec:
        quota:
          hard:
            count/routes.route.openshift.io: "900"
            count/servicemonitors.monitoring.coreos.com: "100"
            count/simon.krenger.ch: "100"
            pods: "100"
        selector:
          annotations:
            openshift.io/requester: kube:admin
      status:
        namespaces:
      [..]
        total:
          hard:
            count/routes.route.openshift.io: "900"
            count/servicemonitors.monitoring.coreos.com: "100"
            count/simon.krenger.ch: "100"
            pods: "100"
          used:
            count/routes.route.openshift.io: "0"
            pods: "4"
      ~~~
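
      A quick way to see which counters are missing from the aggregated usage (the quota name below matches the reproducer later in this report):

      ~~~
      # Print only the aggregated "used" totals of the ClusterResourceQuota;
      # count/servicemonitors.monitoring.coreos.com is absent from the output.
      oc get clusterresourcequota case-03509174 -o jsonpath='{.status.total.used}{"\n"}'
      ~~~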
      
      This behaviour applies not only to "servicemonitors.monitoring.coreos.com" objects but also to other object types, such as:
      
      - count/kafkas.kafka.strimzi.io: '0'
      - count/prometheusrules.monitoring.coreos.com: '100'
      - count/servicemonitors.monitoring.coreos.com: '100'
      
      The debug output for kube-controller-manager shows the following entries, which may or may not be related:
      
      ~~~
      $ oc logs kube-controller-manager-ip-10-0-132-228.eu-west-1.compute.internal | grep "servicemonitor"
      I0511 15:07:17.297620 1 patch_informers_openshift.go:90] Couldn't find informer for monitoring.coreos.com/v1, Resource=servicemonitors
      I0511 15:07:17.297630 1 resource_quota_monitor.go:181] QuotaMonitor using a shared informer for resource "monitoring.coreos.com/v1, Resource=servicemonitors"
      I0511 15:07:17.297642 1 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for servicemonitors.monitoring.coreos.com
      [..]
      I0511 15:07:17.486279 1 patch_informers_openshift.go:90] Couldn't find informer for monitoring.coreos.com/v1, Resource=servicemonitors
      I0511 15:07:17.486297 1 graph_builder.go:176] using a shared informer for resource "monitoring.coreos.com/v1, Resource=servicemonitors", kind "monitoring.coreos.com/v1, Kind=ServiceMonitor"
      ~~~
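
      The same output can be collected on a live cluster roughly as follows; the pod name is cluster-specific and the container name "kube-controller-manager" is an assumption:

      ~~~
      # List the kube-controller-manager pods, then grep their logs for the
      # QuotaMonitor / informer messages shown above.
      oc -n openshift-kube-controller-manager get pods
      oc -n openshift-kube-controller-manager logs <kube-controller-manager-pod> \
        -c kube-controller-manager | grep -iE "servicemonitor|quotamonitor"
      ~~~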

      Version-Release number of selected component (if applicable):

      OpenShift Container Platform 4.12.15

      How reproducible:

      Always

      Steps to Reproduce:

      1. On an OCP 4.12 cluster, create the following ClusterResourceQuota (an apply/verify sketch follows the manifest):
      
      ~~~
      apiVersion: quota.openshift.io/v1
      kind: ClusterResourceQuota
      metadata:
        name: case-03509174
      spec:
        quota: 
          hard:
            count/servicemonitors.monitoring.coreos.com: "100"
            pods: "100"
        selector:
          annotations: 
            openshift.io/requester: "kube:admin"
      ~~~
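
      A minimal way to apply and verify step 1 (the file name quota.yaml is illustrative):

      ~~~
      # Save the manifest above as quota.yaml, then create it and confirm the
      # hard limits are listed in the quota status.
      oc apply -f quota.yaml
      oc describe clusterresourcequota case-03509174
      ~~~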
      
      2. As "kubeadmin", create a new project and deploy one new ServiceMonitor, for example: 
      
      ~~~
      apiVersion: monitoring.coreos.com/v1
      kind: ServiceMonitor
      metadata:
        name: simon-servicemon-2
        namespace: simon-1
      spec:
        endpoints:
          - path: /metrics
            port: http
            scheme: http
        jobLabel: component
        selector:
          matchLabels:
            deployment: echoenv-1
      ~~~
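
      Equivalent commands for step 2, assuming the ServiceMonitor manifest above is saved as servicemonitor.yaml and the session is logged in as kube:admin:

      ~~~
      # Create the project and the ServiceMonitor, then check the quota usage.
      oc new-project simon-1
      oc apply -f servicemonitor.yaml
      oc describe clusterresourcequota case-03509174
      ~~~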

      Actual results:

      The "used" field for ServiceMonitors is not populated in the ClusterResourceQuota for certain objects. It is unclear if these quotas are enforced or not

      Expected results:

      The ClusterResourceQuota usage for ServiceMonitors is updated, and the quota is enforced.
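
      For comparison, a working ClusterResourceQuota would be expected to include the ServiceMonitor counter in its aggregated usage, for example:

      ~~~
      # Expected (illustrative): the output contains
      #   "count/servicemonitors.monitoring.coreos.com":"1"
      oc get clusterresourcequota case-03509174 -o jsonpath='{.status.total.used}{"\n"}'
      ~~~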

      Additional info:

      * Must-gather for a cluster showing this behaviour (added debug for kube-controller-manager) is available here: https://drive.google.com/file/d/1ioEEHZQVHG46vIzDdNm6pwiTjkL9QQRE/view?usp=share_link
      * Slack discussion: https://redhat-internal.slack.com/archives/CKJR6200N/p1683876047243989

              Assignee: Filip Krepinsky (fkrepins@redhat.com)
              Reporter: Simon Krenger (rhn-support-skrenger)
              QA Contact: ying zhou