OpenShift Monitoring / MON-1708

Enforce label scrape limits for UWM


    • Type: Task
    • Resolution: Done
    • Priority: Normal
    • Fix Version: openshift-4.11
    • Sprint: Sprint 216

      Following up on https://issues.redhat.com/browse/MON-1320, we added three new CLI flags to Prometheus to apply different limits to the samples' labels. These new flags are available starting from Prometheus v2.27.0, which will most likely ship in OpenShift 4.9.

      The limits that we want to look into for OCP are the following ones:

      # Per-scrape limit on the number of labels that will be accepted for a sample. If
      # more than this number of labels are present post metric-relabeling, the
      # entire scrape will be treated as failed. 0 means no limit.
      [ label_limit: <int> | default = 0 ]
      
      # Per-scrape limit on the length of label names that will be accepted for a sample.
      # If a label name is longer than this number post metric-relabeling, the entire
      # scrape will be treated as failed. 0 means no limit.
      [ label_name_length_limit: <int> | default = 0 ]
      
      # Per-scrape limit on the length of label values that will be accepted for a sample.
      # If a label value is longer than this number post metric-relabeling, the
      # entire scrape will be treated as failed. 0 means no limit.
      [ label_value_length_limit: <int> | default = 0 ]
      

      We could benefit from these by setting relatively high values that would only be exceeded in cases of unbounded cardinality, thus rejecting such targets completely if they breached our constraints.
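      For illustration, a minimal scrape_config sketch showing the three limits in use. The job name, target, and limit values below are placeholders for illustration only, not the values proposed for OCP:

      ```yaml
      scrape_configs:
        - job_name: 'example'               # hypothetical job
          # Reject the whole scrape if any sample carries more than 30 labels,
          # a label name longer than 100 chars, or a label value longer than 1000 chars.
          label_limit: 30
          label_name_length_limit: 100
          label_value_length_limit: 1000
          static_configs:
            - targets: ['localhost:8080']   # placeholder target
      ```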

      DoD:

      • Being able to configure label scrape limits for UWM
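      One possible mechanism is the enforced-limit fields that prometheus-operator exposes on the Prometheus custom resource (enforcedLabelLimit and friends), which cap the corresponding per-scrape values. The sketch below is illustrative; the exact field values and whether CMO wires them up this way for UWM are assumptions, not the final design:

      ```yaml
      apiVersion: monitoring.coreos.com/v1
      kind: Prometheus
      metadata:
        name: user-workload
        namespace: openshift-user-workload-monitoring
      spec:
        # Enforced limits apply to every scrape target managed by this
        # Prometheus and cap any higher per-ServiceMonitor setting.
        enforcedLabelLimit: 30               # placeholder value
        enforcedLabelNameLengthLimit: 100    # placeholder value
        enforcedLabelValueLengthLimit: 1000  # placeholder value
      ```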

            janantha@redhat.com Jayapriya Pai
            dgrisonn@redhat.com Damien Grisonnet
            Junqi Zhao
