- Bug
- Resolution: Done-Errata
- Undefined
- None
- 4.14
- None
- Critical
- No
- CNF Network Sprint 245
- 1
- False
This is a clone of issue OCPBUGS-18461. The following is the description of the original issue:
—
Description of problem:
The MetalLB CR cannot be installed successfully: 4 of the 6 containers in each speaker pod come up, but the remaining two keep crashing and restarting.
Version-Release number of selected component (if applicable):
FIPS-enabled bare-metal cluster

oc version
Client Version: 4.13.0
Kustomize Version: v4.5.7
Server Version: 4.14.0-0.nightly-2023-08-28-154013
Kubernetes Version: v1.27.4+d424288
How reproducible:
Always
Steps to Reproduce:
1. Install the MetalLB operator.
2. Create the MetalLB CR (a minimal example is sketched below).
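The report does not include the exact CR used in step 2; a minimal MetalLB resource for the operator typically looks like the following (sketch only, resource name assumed):

apiVersion: metallb.io/v1beta1
kind: MetalLB
metadata:
  name: metallb            # conventional name; adjust if your deployment differs
  namespace: metallb-system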
Actual results:
oc get csv -n metallb-system
NAME                                    DISPLAY            VERSION               REPLACES   PHASE
metallb-operator.v4.14.0-202308311525   MetalLB Operator   4.14.0-202308311525              Succeeded

oc get pods -n metallb-system
NAME                                                   READY   STATUS             RESTARTS      AGE
controller-7856d6577b-d92mp                            2/2     Running            0             3m38s
metallb-operator-controller-manager-6bdf456676-w8nsp   1/1     Running            0             48m
metallb-operator-webhook-server-5c6cc76856-h9cf5       1/1     Running            0             48m
speaker-4lf78                                          4/6     CrashLoopBackOff   6 (35s ago)   3m38s
speaker-5t44j                                          4/6     CrashLoopBackOff   6 (47s ago)   3m38s
speaker-6ptd7                                          4/6     CrashLoopBackOff   6 (42s ago)   3m38s
speaker-frhh9                                          4/6     CrashLoopBackOff   6 (33s ago)   3m38s
speaker-vz6dp                                          4/6     CrashLoopBackOff   6 (38s ago)   3m38s
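The unready containers in the speaker pods are frr and frr-metrics (see the pod YAML under Additional info). Their logs can be pulled with commands like the following (diagnostic sketch; the output was not captured in this report):

oc logs speaker-4lf78 -n metallb-system -c frr --previous
oc logs speaker-4lf78 -n metallb-system -c frr-metrics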
Expected results:
The MetalLB CR is created successfully and all six containers in each speaker pod become Ready.
Additional info:
oc get pod speaker-4lf78 -n metallb-system -oyaml apiVersion: v1 kind: Pod metadata: annotations: openshift.io/scc: privileged creationTimestamp: "2023-09-01T16:31:53Z" generateName: speaker- labels: app: metallb component: speaker controller-revision-hash: 79ffd6d6df pod-template-generation: "1" name: speaker-4lf78 namespace: metallb-system ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: DaemonSet name: speaker uid: b375d4c2-f56c-4156-8680-7681fd09c22e resourceVersion: "117489" uid: 6bc3bc9c-5c43-450b-be27-397acfffa36c spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchFields: - key: metadata.name operator: In values: - worker-1 containers: - args: - --port=29150 - --log-level=info command: - /speaker env: - name: METALLB_NODE_NAME valueFrom: fieldRef: apiVersion: v1 fieldPath: spec.nodeName - name: METALLB_HOST valueFrom: fieldRef: apiVersion: v1 fieldPath: status.hostIP - name: METALLB_ML_BIND_ADDR valueFrom: fieldRef: apiVersion: v1 fieldPath: status.podIP - name: METALLB_ML_LABELS value: app=metallb,component=speaker - name: METALLB_ML_BIND_PORT value: "9122" - name: METALLB_ML_SECRET_KEY_PATH value: /etc/ml_secret_key - name: FRR_CONFIG_FILE value: /etc/frr_reloader/frr.conf - name: FRR_RELOADER_PID_FILE value: /etc/frr_reloader/reloader.pid - name: METALLB_BGP_TYPE value: frr image: registry.redhat.io/openshift4/metallb-rhel8@sha256:bde436b8dee5712c8f311d641e66a1ca05128cc69b73716e0d9cac182334e9d5 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 3 httpGet: path: /metrics port: monitoring scheme: HTTP initialDelaySeconds: 10 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 name: speaker ports: - containerPort: 29150 hostPort: 29150 name: monitoring protocol: TCP - containerPort: 9122 hostPort: 9122 name: memberlist-tcp protocol: TCP - containerPort: 9122 hostPort: 9122 name: memberlist-udp protocol: UDP readinessProbe: failureThreshold: 3 httpGet: path: /metrics port: monitoring scheme: HTTP initialDelaySeconds: 10 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 resources: {} securityContext: allowPrivilegeEscalation: false capabilities: add: - NET_RAW drop: - ALL readOnlyRootFilesystem: true terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /etc/ml_secret_key name: memberlist - mountPath: /etc/frr_reloader name: reloader - mountPath: /etc/metallb name: metallb-excludel2 - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-j5hrs readOnly: true - command: - /bin/sh - -c - | /sbin/tini -- /usr/lib/frr/docker-start & attempts=0 until [[ -f /etc/frr/frr.log || $attempts -eq 60 ]]; do sleep 1 attempts=$(( $attempts + 1 )) done tail -f /etc/frr/frr.log env: - name: TINI_SUBREAPER value: "true" image: registry.redhat.io/openshift4/frr-rhel9@sha256:cd6635fb43e58f3642e7c623b7dc8d86124d27a078a0e430a24be0031dd952ea imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 3 httpGet: path: /livez port: 29151 scheme: HTTP periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 name: frr resources: {} securityContext: capabilities: add: - NET_ADMIN - NET_RAW - SYS_ADMIN - NET_BIND_SERVICE startupProbe: failureThreshold: 30 httpGet: path: /livez port: 29151 scheme: HTTP periodSeconds: 5 successThreshold: 1 timeoutSeconds: 1 terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/run/frr name: frr-sockets - mountPath: /etc/frr name: frr-conf - 
mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-j5hrs readOnly: true - command: - /etc/frr_reloader/frr-reloader.sh image: registry.redhat.io/openshift4/frr-rhel9@sha256:cd6635fb43e58f3642e7c623b7dc8d86124d27a078a0e430a24be0031dd952ea imagePullPolicy: IfNotPresent name: reloader resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/run/frr name: frr-sockets - mountPath: /etc/frr name: frr-conf - mountPath: /etc/frr_reloader name: reloader - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-j5hrs readOnly: true - args: - --metrics-port=29151 command: - /etc/frr_metrics/frr-metrics image: registry.redhat.io/openshift4/frr-rhel9@sha256:cd6635fb43e58f3642e7c623b7dc8d86124d27a078a0e430a24be0031dd952ea imagePullPolicy: IfNotPresent name: frr-metrics ports: - containerPort: 29151 hostPort: 29151 name: monitoring protocol: TCP resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/run/frr name: frr-sockets - mountPath: /etc/frr name: frr-conf - mountPath: /etc/frr_metrics name: metrics - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-j5hrs readOnly: true - args: - --logtostderr - --secure-listen-address=:9120 - --upstream=http://$(METALLB_HOST):29150/ - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 - --tls-private-key-file=/etc/metrics/tls.key - --tls-cert-file=/etc/metrics/tls.crt env: - name: METALLB_HOST valueFrom: fieldRef: apiVersion: v1 fieldPath: status.hostIP image: registry.redhat.io/openshift4/ose-kube-rbac-proxy@sha256:a6e290274b006210fcae229500e65dbff595415ea3a71b5af21709536e8ddaa6 imagePullPolicy: IfNotPresent name: kube-rbac-proxy ports: - containerPort: 9120 hostPort: 9120 name: metricshttps protocol: TCP resources: requests: cpu: 10m memory: 20Mi terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /etc/metrics name: metrics-certs readOnly: true - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-j5hrs readOnly: true - args: - --logtostderr - --secure-listen-address=:9121 - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 - --upstream=http://$(METALLB_HOST):29151/ - --tls-private-key-file=/etc/metrics/tls.key - --tls-cert-file=/etc/metrics/tls.crt env: - name: METALLB_HOST valueFrom: fieldRef: apiVersion: v1 fieldPath: status.hostIP image: registry.redhat.io/openshift4/ose-kube-rbac-proxy@sha256:a6e290274b006210fcae229500e65dbff595415ea3a71b5af21709536e8ddaa6 imagePullPolicy: IfNotPresent name: kube-rbac-proxy-frr ports: - containerPort: 9121 hostPort: 9121 name: metricshttps protocol: TCP resources: requests: cpu: 10m memory: 20Mi terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /etc/metrics name: metrics-certs readOnly: true - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-j5hrs readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true hostNetwork: true imagePullSecrets: - name: speaker-dockercfg-p8cwl initContainers: - command: - /bin/sh - 
-c - cp -rLf /tmp/frr/* /etc/frr/ image: registry.redhat.io/openshift4/frr-rhel9@sha256:cd6635fb43e58f3642e7c623b7dc8d86124d27a078a0e430a24be0031dd952ea imagePullPolicy: IfNotPresent name: cp-frr-files resources: {} securityContext: runAsGroup: 101 runAsNonRoot: true runAsUser: 100 terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /tmp/frr name: frr-startup - mountPath: /etc/frr name: frr-conf - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-j5hrs readOnly: true - command: - /bin/sh - -c - cp -f /frr-reloader.sh /etc/frr_reloader/ image: registry.redhat.io/openshift4/metallb-rhel8@sha256:bde436b8dee5712c8f311d641e66a1ca05128cc69b73716e0d9cac182334e9d5 imagePullPolicy: IfNotPresent name: cp-reloader resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /etc/frr_reloader name: reloader - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-j5hrs readOnly: true - command: - /bin/sh - -c - cp -f /frr-metrics /etc/frr_metrics/ image: registry.redhat.io/openshift4/metallb-rhel8@sha256:bde436b8dee5712c8f311d641e66a1ca05128cc69b73716e0d9cac182334e9d5 imagePullPolicy: IfNotPresent name: cp-metrics resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /etc/frr_metrics name: metrics - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-j5hrs readOnly: true nodeName: worker-1 nodeSelector: kubernetes.io/os: linux node-role.kubernetes.io/worker: "" preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: {} serviceAccount: speaker serviceAccountName: speaker shareProcessNamespace: true terminationGracePeriodSeconds: 0 tolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists - effect: NoSchedule key: node-role.kubernetes.io/control-plane operator: Exists - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists - effect: NoSchedule key: node.kubernetes.io/disk-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/pid-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/unschedulable operator: Exists - effect: NoSchedule key: node.kubernetes.io/network-unavailable operator: Exists volumes: - name: memberlist secret: defaultMode: 420 secretName: metallb-memberlist - configMap: defaultMode: 256 name: metallb-excludel2 name: metallb-excludel2 - emptyDir: {} name: frr-sockets - configMap: defaultMode: 420 name: frr-startup name: frr-startup - emptyDir: {} name: frr-conf - emptyDir: {} name: reloader - emptyDir: {} name: metrics - name: metrics-certs secret: defaultMode: 420 secretName: speaker-certs-secret - name: kube-api-access-j5hrs projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace - configMap: items: - key: service-ca.crt path: service-ca.crt name: openshift-service-ca.crt status: conditions: - lastProbeTime: null lastTransitionTime: "2023-09-01T16:31:58Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: 
"2023-09-01T16:31:53Z" message: 'containers with unready status: [frr frr-metrics]' reason: ContainersNotReady status: "False" type: Ready - lastProbeTime: null lastTransitionTime: "2023-09-01T16:31:53Z" message: 'containers with unready status: [frr frr-metrics]' reason: ContainersNotReady status: "False" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2023-09-01T16:31:53Z" status: "True" type: PodScheduled containerStatuses: - containerID: cri-o://10f7b818687ed0f213ac199d7951944cd76f10eb183515ee42f79dc8e162c018 image: registry.redhat.io/openshift4/frr-rhel9@sha256:cd6635fb43e58f3642e7c623b7dc8d86124d27a078a0e430a24be0031dd952ea imageID: registry.redhat.io/openshift4/frr-rhel9@sha256:49e45547ef4a477f54f4011da201e40db5ddb86ad65e18eefb42059194b1c5d1 lastState: terminated: containerID: cri-o://af87f1f4dc153a589c0a12ee0f182d576916facb731848761eccec6391570c6e exitCode: 143 finishedAt: "2023-09-01T16:34:25Z" reason: Error startedAt: "2023-09-01T16:31:58Z" name: frr ready: false restartCount: 1 started: false state: running: startedAt: "2023-09-01T16:34:25Z" - containerID: cri-o://9a5e3b4a63f755df21a10ece1983cc3b9d017597f7919cfabaedb2104b01c8dc image: registry.redhat.io/openshift4/frr-rhel9@sha256:cd6635fb43e58f3642e7c623b7dc8d86124d27a078a0e430a24be0031dd952ea imageID: registry.redhat.io/openshift4/frr-rhel9@sha256:49e45547ef4a477f54f4011da201e40db5ddb86ad65e18eefb42059194b1c5d1 lastState: terminated: containerID: cri-o://9a5e3b4a63f755df21a10ece1983cc3b9d017597f7919cfabaedb2104b01c8dc exitCode: 1 finishedAt: "2023-09-01T16:34:56Z" reason: Error startedAt: "2023-09-01T16:34:56Z" name: frr-metrics ready: false restartCount: 5 started: false state: waiting: message: back-off 2m40s restarting failed container=frr-metrics pod=speaker-4lf78_metallb-system(6bc3bc9c-5c43-450b-be27-397acfffa36c) reason: CrashLoopBackOff - containerID: cri-o://170fe7c42f98e47273f8237c45e3dd55143f37b5283df12994d70509abb2be56 image: registry.redhat.io/openshift4/ose-kube-rbac-proxy@sha256:a6e290274b006210fcae229500e65dbff595415ea3a71b5af21709536e8ddaa6 imageID: registry.redhat.io/openshift4/ose-kube-rbac-proxy@sha256:4a84b11877d13c89961871827a90b1a6ba95a2c9672dbe11c385f6e70bbcea66 lastState: {} name: kube-rbac-proxy ready: true restartCount: 0 started: true state: running: startedAt: "2023-09-01T16:31:59Z" - containerID: cri-o://da0fac4d8b09129cae37867f353bd2661115f8381b777e51c092e0e98729e3cc image: registry.redhat.io/openshift4/ose-kube-rbac-proxy@sha256:a6e290274b006210fcae229500e65dbff595415ea3a71b5af21709536e8ddaa6 imageID: registry.redhat.io/openshift4/ose-kube-rbac-proxy@sha256:4a84b11877d13c89961871827a90b1a6ba95a2c9672dbe11c385f6e70bbcea66 lastState: {} name: kube-rbac-proxy-frr ready: true restartCount: 0 started: true state: running: startedAt: "2023-09-01T16:32:00Z" - containerID: cri-o://1c538990bac89837c92fb9143d5eb7df8fa9231bd3a22ff5d80cbd7a826a3653 image: registry.redhat.io/openshift4/frr-rhel9@sha256:cd6635fb43e58f3642e7c623b7dc8d86124d27a078a0e430a24be0031dd952ea imageID: registry.redhat.io/openshift4/frr-rhel9@sha256:49e45547ef4a477f54f4011da201e40db5ddb86ad65e18eefb42059194b1c5d1 lastState: {} name: reloader ready: true restartCount: 0 started: true state: running: startedAt: "2023-09-01T16:31:58Z" - containerID: cri-o://e070d5bb65c28c3576f9954808b41850c893049fb117217e867c22e24f5864c3 image: registry.redhat.io/openshift4/metallb-rhel8@sha256:bde436b8dee5712c8f311d641e66a1ca05128cc69b73716e0d9cac182334e9d5 imageID: 
registry.redhat.io/openshift4/metallb-rhel8@sha256:8d6315af3813949f0b5bd6287ba1f9ec84ba396e5d46e4e92f38d5938ac0d5d7 lastState: {} name: speaker ready: true restartCount: 0 started: true state: running: startedAt: "2023-09-01T16:31:58Z" hostIP: 192.168.111.24 initContainerStatuses: - containerID: cri-o://0582fa02b6deff4775b207c721fcbb56833b59cf378833510790e159efbdff53 image: registry.redhat.io/openshift4/frr-rhel9@sha256:cd6635fb43e58f3642e7c623b7dc8d86124d27a078a0e430a24be0031dd952ea imageID: registry.redhat.io/openshift4/frr-rhel9@sha256:49e45547ef4a477f54f4011da201e40db5ddb86ad65e18eefb42059194b1c5d1 lastState: {} name: cp-frr-files ready: true restartCount: 0 state: terminated: containerID: cri-o://0582fa02b6deff4775b207c721fcbb56833b59cf378833510790e159efbdff53 exitCode: 0 finishedAt: "2023-09-01T16:31:55Z" reason: Completed startedAt: "2023-09-01T16:31:55Z" - containerID: cri-o://5477bb27fa0f155f19ba0f453d0ee588f86cb01bc518c642828d1b423e43634f image: registry.redhat.io/openshift4/metallb-rhel8@sha256:bde436b8dee5712c8f311d641e66a1ca05128cc69b73716e0d9cac182334e9d5 imageID: registry.redhat.io/openshift4/metallb-rhel8@sha256:8d6315af3813949f0b5bd6287ba1f9ec84ba396e5d46e4e92f38d5938ac0d5d7 lastState: {} name: cp-reloader ready: true restartCount: 0 state: terminated: containerID: cri-o://5477bb27fa0f155f19ba0f453d0ee588f86cb01bc518c642828d1b423e43634f exitCode: 0 finishedAt: "2023-09-01T16:31:56Z" reason: Completed startedAt: "2023-09-01T16:31:56Z" - containerID: cri-o://2040404aeb0582a068fa8560b76a4d36a71ec4df2c689518aaee4eee9a23ff75 image: registry.redhat.io/openshift4/metallb-rhel8@sha256:bde436b8dee5712c8f311d641e66a1ca05128cc69b73716e0d9cac182334e9d5 imageID: registry.redhat.io/openshift4/metallb-rhel8@sha256:8d6315af3813949f0b5bd6287ba1f9ec84ba396e5d46e4e92f38d5938ac0d5d7 lastState: {} name: cp-metrics ready: true restartCount: 0 state: terminated: containerID: cri-o://2040404aeb0582a068fa8560b76a4d36a71ec4df2c689518aaee4eee9a23ff75 exitCode: 0 finishedAt: "2023-09-01T16:31:57Z" reason: Completed startedAt: "2023-09-01T16:31:57Z" phase: Running podIP: 192.168.111.24 podIPs: - ip: 192.168.111.24 qosClass: Burstable startTime: "2023-09-01T16:31:53Z"
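Since the failure is only reported on a FIPS-enabled cluster, confirming FIPS mode on the affected node may help when triaging (sketch, not part of the original report; a value of 1 means FIPS is enabled):

oc debug node/worker-1 -- chroot /host cat /proc/sys/crypto/fips_enabled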
- clones: OCPBUGS-18461 Metallb CR can not be created on FIPS Enabled BM cluster (Closed)
- is blocked by: OCPBUGS-18461 Metallb CR can not be created on FIPS Enabled BM cluster (Closed)
- links to: RHBA-2023:7315 OpenShift Container Platform 4.14.z bug fix update