-
Bug
-
Resolution: Done-Errata
-
Major
-
Logging 6.0.0
-
False
-
None
-
False
-
NEW
-
VERIFIED
-
Before this update, the PreferredScheduling annotation was missing on the collector pods. With this update, the PreferredScheduling annotation is added to the collector daemonset.
-
Bug Fix
-
-
-
-
Log Collection - Sprint 258, Log Collection - Sprint 259
-
Moderate
Description of problem:
On a spoke cluster deployed with OCP 4.17.0-rc.0 and the Telco DU profile applied (which includes workload partitioning enabled), the openshift-logging collector pod is missing the following annotation: target.workload.openshift.io/management: '{"effect":"PreferredDuringScheduling"}'. This annotation was present at least as of the OCP 4.17.0-0.nightly-2024-08-13-031847 timeframe (I don't have the logging operator version used then).
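For reference, when workload partitioning applies to a pod, the annotation is expected to appear in the pod metadata roughly as follows (minimal sketch; surrounding fields omitted and exact formatting may differ per cluster):

apiVersion: v1
kind: Pod
metadata:
  annotations:
    target.workload.openshift.io/management: '{"effect":"PreferredDuringScheduling"}'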
Version-Release number of selected component (if applicable):
cluster logging: v5.9.6-10 and v5.9.6-11
OCP 4.17.0-rc.0
How reproducible:
Always
Steps to Reproduce:
1. Deploy an SNO spoke with the Telco DU profile applied.
2. Examine the pod spec of the openshift-logging collector-xxxxx pod, for example with the command below.
3.
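Example command to check for the annotation (assuming the collector pods carry the component=collector label, as in the pod spec below; adjust the selector if needed):

oc -n openshift-logging get pods -l component=collector \
  -o jsonpath='{.items[*].metadata.annotations.target\.workload\.openshift\.io/management}'

An empty result indicates the annotation is missing.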
Actual results:
The annotation is missing.
Expected results:
The annotation should be present.
Additional info:
must-gather and other logs will be added in a comment. Podspec:
---
apiVersion: v1
kind: Pod
metadata:
  annotations:
    k8s.ovn.org/pod-networks: '{"default":{"ip_addresses":["fd01:0:0:1::54/64"],"mac_address":"0a:58:ea:70:19:77","gateway_ips":["fd01:0:0:1::1"],"routes":[{"dest":"fd01::/48","nextHop":"fd01:0:0:1::1"},{"dest":"fd02::/112","nextHop":"fd01:0:0:1::1"},{"dest":"fd98::/64","nextHop":"fd01:0:0:1::1"}],"ip_address":"fd01:0:0:1::54/64","gateway_ip":"fd01:0:0:1::1","role":"primary"}}'
    k8s.v1.cni.cncf.io/network-status: |-
      [{
          "name": "ovn-kubernetes",
          "interface": "eth0",
          "ips": [
              "fd01:0:0:1::54"
          ],
          "mac": "0a:58:ea:70:19:77",
          "default": true,
          "dns": {}
      }]
    logging.openshift.io/secret-hash: "5553989652960438002"
    openshift.io/scc: logging-scc
    seccomp.security.alpha.kubernetes.io/pod: runtime/default
  creationTimestamp: "2024-08-28T17:36:08Z"
  generateName: collector-
  labels:
    app.kubernetes.io/component: collector
    app.kubernetes.io/instance: collector
    app.kubernetes.io/managed-by: cluster-logging-operator
    app.kubernetes.io/name: vector
    app.kubernetes.io/part-of: cluster-logging
    app.kubernetes.io/version: 5.9.0
    component: collector
    controller-revision-hash: 7bcbf54d6b
    implementation: vector
    logging-infra: collector
    pod-security.kubernetes.io/enforce: privileged
    pod-template-generation: "2"
    provider: openshift
    security.openshift.io/scc.podSecurityLabelSync: "false"
    vector.dev/exclude: "true"
  name: collector-5b67x
  namespace: openshift-logging
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: DaemonSet
    name: collector
    uid: fe10755d-fb69-4c41-bf92-28a43ca0f17d
  resourceVersion: "13766"
  uid: 0a6fb570-cf08-4fdb-a9eb-a7d604732a2f
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchFields:
          - key: metadata.name
            operator: In
            values:
            - helix55.lab.eng.rdu2.redhat.com
  containers:
  - args:
    - /usr/bin/run-vector.sh
    command:
    - sh
    env:
    - name: COLLECTOR_CONF_HASH
      value: 46a90e90084e9610e8e6e42f9383a365
    - name: TRUSTED_CA_HASH
      value: 72f205fe6e0fbe28eed65773e289544d
    - name: K8S_NODE_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: spec.nodeName
    - name: NODE_IPV4
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: status.hostIP
    - name: OPENSHIFT_CLUSTER_ID
      value: 03e0c0e4-94c0-48db-8e05-93e3c2b4d908
    - name: POD_IP
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: status.podIP
    - name: POD_IPS
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: status.podIPs
    - name: VECTOR_LOG
      value: WARN
    - name: KUBERNETES_SERVICE_HOST
      value: kubernetes.default.svc
    - name: VECTOR_SELF_NODE_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: spec.nodeName
    image: registry.redhat.io/openshift-logging/vector-rhel9@sha256:dc89e3132b026200b64ffa8472fe78d79ba556a6623fd4ae0c85c00f8188ff62
    imagePullPolicy: IfNotPresent
    name: collector
    ports:
    - containerPort: 24231
      name: metrics
      protocol: TCP
    resources: {}
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - CHOWN
        - DAC_OVERRIDE
        - FOWNER
        - FSETID
        - KILL
        - NET_BIND_SERVICE
        - SETGID
        - SETPCAP
        - SETUID
      readOnlyRootFilesystem: true
      seLinuxOptions:
        type: spc_t
      seccompProfile:
        type: RuntimeDefault
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /etc/collector/metrics
      name: collector-metrics
      readOnly: true
    - mountPath: /tmp
      name: tmp
    - mountPath: /var/log/containers
      name: varlogcontainers
      readOnly: true
    - mountPath: /var/log/pods
      name: varlogpods
      readOnly: true
    - mountPath: /var/log/journal
      name: varlogjournal
      readOnly: true
    - mountPath: /var/log/audit
      name: varlogaudit
      readOnly: true
    - mountPath: /var/log/ovn
      name: varlogovn
      readOnly: true
    - mountPath: /var/log/oauth-apiserver
      name: varlogoauthapiserver
      readOnly: true
    - mountPath: /var/log/oauth-server
      name: varlogoauthserver
      readOnly: true
    - mountPath: /var/log/openshift-apiserver
      name: varlogopenshiftapiserver
      readOnly: true
    - mountPath: /var/log/kube-apiserver
      name: varlogkubeapiserver
      readOnly: true
    - mountPath: /etc/pki/ca-trust/extracted/pem/
      name: collector-trusted-ca-bundle
      readOnly: true
    - mountPath: /etc/vector
      name: config
      readOnly: true
    - mountPath: /var/lib/vector
      name: datadir
    - mountPath: /usr/bin/run-vector.sh
      name: entrypoint
      readOnly: true
      subPath: run-vector.sh
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-kvhp6
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: helix55.lab.eng.rdu2.redhat.com
  nodeSelector:
    kubernetes.io/os: linux
  preemptionPolicy: PreemptLowerPriority
  priority: 2000001000
  priorityClassName: system-node-critical
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext:
    seccompProfile:
      type: RuntimeDefault
  serviceAccount: logcollector
  serviceAccountName: logcollector
  terminationGracePeriodSeconds: 10
  tolerations:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
    operator: Exists
  - effect: NoSchedule
    key: node.kubernetes.io/disk-pressure
    operator: Exists
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
  - effect: NoSchedule
    key: node.kubernetes.io/memory-pressure
    operator: Exists
  - effect: NoSchedule
    key: node.kubernetes.io/pid-pressure
    operator: Exists
  - effect: NoSchedule
    key: node.kubernetes.io/unschedulable
    operator: Exists
  volumes:
  - name: collector-metrics
    secret:
      defaultMode: 420
      secretName: collector-metrics
  - emptyDir:
      medium: Memory
    name: tmp
  - hostPath:
      path: /var/log/containers
      type: ""
    name: varlogcontainers
  - hostPath:
      path: /var/log/pods
      type: ""
    name: varlogpods
  - hostPath:
      path: /var/log/journal
      type: ""
    name: varlogjournal
  - hostPath:
      path: /var/log/audit
      type: ""
    name: varlogaudit
  - hostPath:
      path: /var/log/ovn
      type: ""
    name: varlogovn
  - hostPath:
      path: /var/log/oauth-apiserver
      type: ""
    name: varlogoauthapiserver
  - hostPath:
      path: /var/log/oauth-server
      type: ""
    name: varlogoauthserver
  - hostPath:
      path: /var/log/openshift-apiserver
      type: ""
    name: varlogopenshiftapiserver
  - hostPath:
      path: /var/log/kube-apiserver
      type: ""
    name: varlogkubeapiserver
  - configMap:
      defaultMode: 420
      items:
      - key: ca-bundle.crt
        path: tls-ca-bundle.pem
      name: collector-trusted-ca-bundle
    name: collector-trusted-ca-bundle
  - name: config
    secret:
      defaultMode: 420
      items:
      - key: vector.toml
        path: vector.toml
      optional: true
      secretName: collector-config
  - hostPath:
      path: /var/lib/vector
      type: ""
    name: datadir
  - name: entrypoint
    secret:
      defaultMode: 420
      items:
      - key: run-vector.sh
        path: run-vector.sh
      optional: true
      secretName: collector-config
  - name: kube-api-access-kvhp6
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace
      - configMap:
          items:
          - key: service-ca.crt
            path: service-ca.crt
          name: openshift-service-ca.crt
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2024-08-28T17:38:33Z"
    status: "True"
    type: PodReadyToStartContainers
  - lastProbeTime: null
    lastTransitionTime: "2024-08-28T17:36:08Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2024-08-28T17:38:33Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2024-08-28T17:38:33Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2024-08-28T17:36:08Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: cri-o://bd4fb7f505566a64e74a10f5b68c53d168964cb9318fc763c0d162888c4ecd80
    image: registry.redhat.io/openshift-logging/vector-rhel9@sha256:dc89e3132b026200b64ffa8472fe78d79ba556a6623fd4ae0c85c00f8188ff62
    imageID: registry.redhat.io/openshift-logging/vector-rhel9@sha256:962d4f20b28f2d72ef631e7fca1b1248b143f7f9f9d2df62e0a400cc6f00cc63
    lastState: {}
    name: collector
    ready: true
    restartCount: 0
    started: true
    state:
      running:
        startedAt: "2024-08-28T17:38:32Z"
  hostIP: 2620:52:0:800::1ff3
  hostIPs:
  - ip: 2620:52:0:800::1ff3
  phase: Running
  podIP: fd01:0:0:1::54
  podIPs:
  - ip: fd01:0:0:1::54
  qosClass: BestEffort
  startTime: "2024-08-28T17:36:08Z"
- clones
-
LOG-6023 logging collector pod missing PreferredScheduling annotation for WLP
- Closed
- links to
-
RHBA-2024:6693 Logging for Red Hat OpenShift - 6.0.0
- mentioned on