-
Bug
-
Resolution: Cannot Reproduce
-
Normal
-
None
-
4.12.z
-
None
-
No
-
3
-
PODAUTO - Sprint 251, PODAUTO - Sprint 253, PODAUTO - Sprint 255
-
3
-
False
-
Description of problem:
After upgrading the cluster to 4.12 and the Custom Metrics Autoscaler to v2.11.2-322, the CMA stopped working. The metrics adapter logs show the following error:
E0223 10:45:29.813667 1 provider.go:107] keda_metrics_adapter/provider "msg"="please specify scaledObject name, it needs to be set as value of label selector \"scaledobject.keda.sh/name\" on the query" "error"="scaledObject name is not specified"
There is just one ScaledObject defined; it has the name (and the scaledobject.keda.sh/name label) set and was working before the upgrade:
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  annotations:
    argocd.argoproj.io/tracking-id: ocp-jc-te-v4-pre-01-elasticbeat-prod-bck:keda.sh/ScaledObject:elasticbeat-prod-bck/scaledobject-logstash-elastic-json
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"keda.sh/v1alpha1","kind":"ScaledObject","metadata":{"annotations":{"argocd.argoproj.io/tracking-id":"xxxx-elasticbeat-prod-bck:keda.sh/ScaledObject:elasticbeat-prod-bck/scaledobject-logstash-elastic-json"},"labels":{"app.kubernetes.io/instance":"xxxx-elasticbeat-prod-bck"},"name":"scaledobject-logstash-elastic-json","namespace":"elasticbeat-prod-bck"},"spec":{"advanced":{"horizontalPodAutoscalerConfig":{"behavior":{"scaleDown":{"policies":[{"periodSeconds":1800,"type":"Pods","value":1}],"stabilizationWindowSeconds":300},"scaleUp":{"policies":[{"periodSeconds":60,"type":"Pods","value":3}],"selectPolicy":"Max","stabilizationWindowSeconds":60}}},"restoreToOriginalReplicaCount":true},"cooldownPeriod":120,"maxReplicaCount":15,"minReplicaCount":2,"pollingInterval":660,"scaleTargetRef":{"apiVersion":"apps.openshift.io/v1","kind":"DeploymentConfig","name":"ocp-logstash-elastic-json"},"triggers":[{"authenticationRef":{"kind":"ClusterTriggerAuthentication","name":"keda-trigger-auth-prometheus-user-workload"},"metadata":{"authModes":"bearer","metricName":"consumergroup_json_lag","namespace":"elasticbeat-prod-bck","query":"sum(rate(kafka_consumergroup_lag_sum{consumergroup=\"logstash-json-group\"}[10m]))","serverAddress":"https://thanos-querier.openshift-monitoring.svc.cluster.local:9092","threshold":"50"},"type":"prometheus"}]}}
  creationTimestamp: "2023-10-31T13:19:38Z"
  finalizers:
  - finalizer.keda.sh
  generation: 3
  labels:
    app.kubernetes.io/instance: xxxx-elasticbeat-prod-bck
    scaledobject.keda.sh/name: scaledobject-logstash-elastic-json
  name: scaledobject-logstash-elastic-json
  namespace: elasticbeat-prod-bck
  resourceVersion: "1216053636"
  uid: 8e898a57-5427-4263-9948-e99158bf57cf
spec:
  advanced:
    horizontalPodAutoscalerConfig:
      behavior:
        scaleDown:
          policies:
          - periodSeconds: 1800
            type: Pods
            value: 1
          stabilizationWindowSeconds: 300
        scaleUp:
          policies:
          - periodSeconds: 60
            type: Pods
            value: 3
          selectPolicy: Max
          stabilizationWindowSeconds: 60
    restoreToOriginalReplicaCount: true
  cooldownPeriod: 120
  maxReplicaCount: 15
  minReplicaCount: 2
  pollingInterval: 660
  scaleTargetRef:
    apiVersion: apps.openshift.io/v1
    kind: DeploymentConfig
    name: ocp-logstash-elastic-json
  triggers:
  - authenticationRef:
      kind: ClusterTriggerAuthentication
      name: keda-trigger-auth-prometheus-user-workload
    metadata:
      authModes: bearer
      metricName: consumergroup_json_lag
      namespace: elasticbeat-prod-bck
      query: sum(rate(kafka_consumergroup_lag_sum{consumergroup="logstash-json-group"}[10m]))
      serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092
      threshold: "50"
    type: prometheus
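The reported error comes from the adapter's external metrics provider, which expects every external metrics query arriving from the HPA to carry a scaledobject.keda.sh/name label selector. A minimal diagnostic sketch follows; the HPA name assumes KEDA's default keda-hpa-<ScaledObject name> convention and the commented metric spec is an illustration of the expected shape, not output captured from this cluster:
# Inspect the HPA that KEDA generated for the ScaledObject and check whether
# its external metric spec carries the scaledobject.keda.sh/name selector
# (HPA name assumed to follow the default keda-hpa-<name> convention):
oc -n elasticbeat-prod-bck get hpa keda-hpa-scaledobject-logstash-elastic-json -o yaml
# In a working 2.11-era setup, spec.metrics should look roughly like:
#   - type: External
#     external:
#       metric:
#         name: s0-prometheus-consumergroup_json_lag   # metric name format is an assumption
#         selector:
#           matchLabels:
#             scaledobject.keda.sh/name: scaledobject-logstash-elastic-json
#       target:
#         type: AverageValue
#         averageValue: "50"
# If the selector/matchLabels block is absent (for example because the HPA was
# created by the pre-upgrade operator and never regenerated), the adapter cannot
# resolve the owning ScaledObject and logs "scaledObject name is not specified".
If the selector turns out to be missing, re-applying or recreating the ScaledObject so the operator rebuilds the HPA would be one way to confirm that hypothesis.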
Version-Release number of selected component (if applicable):
v2.11.2-322
How reproducible:
Upgrade the CMA.
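A rough sketch of how the post-upgrade state might be verified, assuming the operator is installed in the default openshift-keda namespace and the adapter deployment is named keda-metrics-apiserver (both assumptions, not taken from this report):
# Confirm the installed Custom Metrics Autoscaler operator version
oc -n openshift-keda get csv
# Check the adapter pods and grep their logs for the reported error
oc -n openshift-keda get pods
oc -n openshift-keda logs deployment/keda-metrics-apiserver | grep "scaledObject name is not specified"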