OpenShift Bugs / OCPBUGS-2934

Ingress node firewall pod's events container causes the pod to go into CrashLoopBackOff when the sctp module is loaded on the node


Details

    • SDN Sprint 226, SDN Sprint 227
    Description

      Description of problem:

      The daemon pods, which were running fine, start restarting on any node where the sctp module is loaded.

      Version-Release number of selected component (if applicable):

      4.12

      How reproducible:

      Always

      Steps to Reproduce:

      1. Install the ingress node firewall operator and create an ingress node firewall config (an example config is shown after the pod listing below).
      oc get csv -n openshift-ingress-node-firewall
      NAME                                        DISPLAY                          VERSION               REPLACES   PHASE
      ingress-node-firewall.4.12.0-202210262313   ingress-node-firewall-operator   4.12.0-202210262313              Succeeded
      
      oc get pods -n openshift-ingress-node-firewall -owide
      NAME                                                       READY   STATUS    RESTARTS   AGE     IP               NODE                             NOMINATED NODE   READINESS GATES
      ingress-node-firewall-controller-manager-d6cb6c859-n2979   2/2     Running   0          7m57s   10.128.2.10      asood-10273-w6xnn-worker-mqzj7   <none>           <none>
      ingress-node-firewall-daemon-bwx6t                         3/3     Running   0          4m27s   172.31.249.38    asood-10273-w6xnn-worker-h5l2k   <none>           <none>
      ingress-node-firewall-daemon-d45j5                         3/3     Running   0          4m27s   172.31.249.163   asood-10273-w6xnn-worker-mqzj7   <none>           <none>
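      
      For reference, the config used is equivalent to the following (reconstructed from the name, namespace and nodeSelector visible in the controller-manager log under Additional info; the exact manifest was not captured):
      
      apiVersion: ingressnodefirewall.openshift.io/v1alpha1
      kind: IngressNodeFirewallConfig
      metadata:
        name: ingressnodefirewallconfig
        namespace: openshift-ingress-node-firewall
      spec:
        nodeSelector:
          node-role.kubernetes.io/worker: ""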
      
      
      
      2. Load the sctp module on the worker nodes by applying the MachineConfig below, then wait for the nodes to return to the Ready state.
      ---
      apiVersion: machineconfiguration.openshift.io/v1
      kind: MachineConfig
      metadata:
        labels:
          machineconfiguration.openshift.io/role: worker
        name: load-sctp-module
      spec:
        config:
          ignition:
            version: 2.2.0
          storage:
            files:
              - contents:
                  source: data:,
                  verification: {}
                filesystem: root
                mode: 420
                path: /etc/modprobe.d/sctp-blacklist.conf
              - contents:
                  source: data:text/plain;charset=utf-8,sctp
                filesystem: root
                mode: 420
                path: /etc/modules-load.d/sctp-load.conf
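      
      The MachineConfig can be applied and the worker pool rollout watched with standard commands, for example (the filename is illustrative):
      
      oc apply -f load-sctp-module.yaml
      oc get mcp worker -w    # wait until the worker MachineConfigPool reports UPDATED=True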
      
      [asood@asood ~]$ oc get nodes
      NAME                             STATUS                        ROLES                  AGE   VERSION
      asood-10273-w6xnn-master-0       Ready                         control-plane,master   75m   v1.25.2+4bd0702
      asood-10273-w6xnn-master-1       Ready                         control-plane,master   75m   v1.25.2+4bd0702
      asood-10273-w6xnn-master-2       Ready                         control-plane,master   75m   v1.25.2+4bd0702
      asood-10273-w6xnn-worker-h5l2k   NotReady,SchedulingDisabled   worker                 44m   v1.25.2+4bd0702
      asood-10273-w6xnn-worker-mqzj7   Ready                         worker                 44m   v1.25.2+4bd0702
      [asood@asood ~]$ oc get nodes
      NAME                             STATUS                     ROLES                  AGE   VERSION
      asood-10273-w6xnn-master-0       Ready                      control-plane,master   76m   v1.25.2+4bd0702
      asood-10273-w6xnn-master-1       Ready                      control-plane,master   76m   v1.25.2+4bd0702
      asood-10273-w6xnn-master-2       Ready                      control-plane,master   76m   v1.25.2+4bd0702
      asood-10273-w6xnn-worker-h5l2k   Ready                      worker                 46m   v1.25.2+4bd0702
      asood-10273-w6xnn-worker-mqzj7   Ready,SchedulingDisabled   worker                 46m   v1.25.2+4bd0702
      [asood@asood ~]$ oc get nodes
      NAME                             STATUS   ROLES                  AGE   VERSION
      asood-10273-w6xnn-master-0       Ready    control-plane,master   84m   v1.25.2+4bd0702
      asood-10273-w6xnn-master-1       Ready    control-plane,master   84m   v1.25.2+4bd0702
      asood-10273-w6xnn-master-2       Ready    control-plane,master   84m   v1.25.2+4bd0702
      asood-10273-w6xnn-worker-h5l2k   Ready    worker                 54m   v1.25.2+4bd0702
      asood-10273-w6xnn-worker-mqzj7   Ready    worker                 54m   v1.25.2+4bd0702
      
      [asood@asood ~]$ oc get pods -n openshift-ingress-node-firewall -owide
      NAME                                                       READY   STATUS    RESTARTS   AGE   IP               NODE                             NOMINATED NODE   READINESS GATES
      ingress-node-firewall-controller-manager-d6cb6c859-sv9vn   2/2     Running   0          27s   10.131.0.23      asood-10273-w6xnn-worker-h5l2k   <none>           <none>
      ingress-node-firewall-daemon-bwx6t                         2/3     Error     6          8m    172.31.249.38    asood-10273-w6xnn-worker-h5l2k   <none>           <none>
      ingress-node-firewall-daemon-d45j5                         3/3     Running   0          8m    172.31.249.163   asood-10273-w6xnn-worker-mqzj7   <none>           <none>
      [asood@asood ~]$ oc get pods -n openshift-ingress-node-firewall -owide
      NAME                                                       READY   STATUS             RESTARTS        AGE     IP               NODE                             NOMINATED NODE   READINESS GATES
      ingress-node-firewall-controller-manager-d6cb6c859-sv9vn   2/2     Running            0               8m58s   10.131.0.23      asood-10273-w6xnn-worker-h5l2k   <none>           <none>
      ingress-node-firewall-daemon-bwx6t                         2/3     CrashLoopBackOff   9 (3m33s ago)   16m     172.31.249.38    asood-10273-w6xnn-worker-h5l2k   <none>           <none>
      ingress-node-firewall-daemon-d45j5                         2/3     CrashLoopBackOff   8 (17s ago)     16m     172.31.249.163   asood-10273-w6xnn-worker-mqzj7   <none>           <none>
      
      
      

      Actual results:

      The ingress node firewall daemon pods end up in CrashLoopBackOff state.
      
      
      

      Expected results:

      The daemon pods should remain in Ready state after the sctp module is loaded.
      
      

      Additional info:

      The cluster is installed on vSphere (ipi-on-vsphere/versioned-installer-vmc7-ovn).
      
      Before loading the SCTP module:
      oc debug node/asood-10273-w6xnn-worker-h5l2k
      Starting pod/asood-10273-w6xnn-worker-h5l2k-debug ...
      To use host binaries, run `chroot /host`
      Pod IP: 172.31.249.38
      If you don't see a command prompt, try pressing enter.
      sh-4.4# chroot /host
      sh-4.4# lsmod | grep sctp
      sh-4.4# exit
      exit
      sh-4.4# exit
      exit
      
      After loading the SCTP module:
      oc debug node/asood-10273-w6xnn-worker-h5l2k
      Starting pod/asood-10273-w6xnn-worker-h5l2k-debug ...
      To use host binaries, run `chroot /host`
      Pod IP: 172.31.249.38
      If you don't see a command prompt, try pressing enter.
      sh-4.4# chroot /host
      sh-4.4# lsmod | grep sctp
      sctp                  421888  34
      ip6_udp_tunnel         16384  2 geneve,sctp
      udp_tunnel             20480  2 geneve,sctp
      libcrc32c              16384  6 nf_conntrack,nf_nat,openvswitch,nf_tables,xfs,sctp
      sh-4.4# 
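      
      A non-interactive equivalent of the checks above (run from the workstation, assuming the same node name) would be:
      
      oc debug node/asood-10273-w6xnn-worker-h5l2k -- chroot /host sh -c 'lsmod | grep -w sctp'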
      
      
      
      oc describe pod ingress-node-firewall-daemon-bwx6t -n openshift-ingress-node-firewall 
      Name:                 ingress-node-firewall-daemon-bwx6t
      Namespace:            openshift-ingress-node-firewall
      Priority:             2000001000
      Priority Class Name:  system-node-critical
      Node:                 asood-10273-w6xnn-worker-h5l2k/172.31.249.38
      Start Time:           Thu, 27 Oct 2022 17:52:12 -0400
      Labels:               app=ingress-node-firewall-daemon
                            component=daemon
                            controller-revision-hash=f7b68c595
                            pod-template-generation=1
                            type=infra
      Annotations:          openshift.io/scc: privileged
      Status:               Running
      IP:                   172.31.249.38
      IPs:
        IP:           172.31.249.38
      Controlled By:  DaemonSet/ingress-node-firewall-daemon
      Containers:
        daemon:
          Container ID:   cri-o://92896263dfba382986cedf2ae300632c0a55e97c41c90d6b9004ebfafc1112a3
          Image:          registry.redhat.io/openshift4/ingress-node-firewall-daemon@sha256:a7c7337ffcb9e17608a8d8bf701af9ad3e0c11e3a9b93fb912558cee1208c4f9
          Image ID:       registry.redhat.io/openshift4/ingress-node-firewall-daemon@sha256:a7c7337ffcb9e17608a8d8bf701af9ad3e0c11e3a9b93fb912558cee1208c4f9
          Port:           <none>
          Host Port:      <none>
          State:          Running
            Started:      Thu, 27 Oct 2022 17:59:25 -0400
          Ready:          True
          Restart Count:  1
          Environment:
            NODE_NAME:             (v1:spec.nodeName)
            NAMESPACE:            openshift-ingress-node-firewall (v1:metadata.namespace)
            POLL_PERIOD_SECONDS:  30
          Mounts:
            /sys/fs/bpf from bpf-maps (rw)
            /var/run from syslog-socket (rw)
            /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4qczm (ro)
        events:
          Container ID:  cri-o://2686e8168b13d680e6358c378f6091874f6369f6028fd516d41b57d365e7e03f
          Image:         registry.redhat.io/openshift4/ingress-node-firewall-daemon@sha256:a7c7337ffcb9e17608a8d8bf701af9ad3e0c11e3a9b93fb912558cee1208c4f9
          Image ID:      registry.redhat.io/openshift4/ingress-node-firewall-daemon@sha256:a7c7337ffcb9e17608a8d8bf701af9ad3e0c11e3a9b93fb912558cee1208c4f9
          Port:          <none>
          Host Port:     <none>
          Command:
            /usr/bin/syslog
          State:          Waiting
            Reason:       CrashLoopBackOff
          Last State:     Terminated
            Reason:       Error
            Exit Code:    1
            Started:      Thu, 27 Oct 2022 18:10:17 -0400
            Finished:     Thu, 27 Oct 2022 18:10:17 -0400
          Ready:          False
          Restart Count:  8
          Requests:
            cpu:        100m
            memory:     256Mi
          Environment:  <none>
          Mounts:
            /var/run from syslog-socket (rw)
            /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4qczm (ro)
        kube-rbac-proxy:
          Container ID:  cri-o://15b6df8abeb9595695200f10341e2cb8ccd6861497ef054a06e537cc771d1927
          Image:         registry.redhat.io/openshift4/ose-kube-rbac-proxy@sha256:6ed81d739e83332a72459fe6b289b490bf53c3ca97d5af9fb34dbe98f7e99c6f
          Image ID:      quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ed81d739e83332a72459fe6b289b490bf53c3ca97d5af9fb34dbe98f7e99c6f
          Port:          9301/TCP
          Host Port:     9301/TCP
          Command:
            /bin/bash
            -c
            #!/bin/bash
            set -euo pipefail
            TLS_PK=/etc/pki/tls/metrics-certs/tls.key
            TLS_CERT=/etc/pki/tls/metrics-certs/tls.crt
            # As the secret mount is optional we must wait for the files to be present.
            # If it isn't created there is probably an issue so we want to crashloop.
            TS=$(date +%s)
            WARN_TS=$(( ${TS} + $(( 20 * 60)) ))
            HAS_LOGGED_INFO=0
            log_missing_certs(){
                CUR_TS=$(date +%s)
                if [[ "${CUR_TS}" -gt "${WARN_TS}"  ]]; then
                  echo $(date -Iseconds) WARN: ingress-node-firewall-daemon-metrics-certs not mounted after 20 minutes.
                elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then
                  echo $(date -Iseconds) INFO: ingress-node-firewall-daemon-metrics-certs not mounted. Waiting 20 minutes.
                  HAS_LOGGED_INFO=1
                fi
            }
            while [[ ! -f "${TLS_PK}" ||  ! -f "${TLS_CERT}" ]] ; do
              log_missing_certs
              sleep 5
            done
            echo $(date -Iseconds) INFO: ingress-node-firewall-daemon-metrics-certs mounted, starting kube-rbac-proxy
            exec /usr/bin/kube-rbac-proxy \
              --logtostderr \
              --secure-listen-address=:9301 \
              --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 \
              --upstream=http://127.0.0.1:39301 / \
              --tls-private-key-file=${TLS_PK} \
              --tls-cert-file=${TLS_CERT}
            
          State:          Running
            Started:      Thu, 27 Oct 2022 17:59:26 -0400
          Ready:          True
          Restart Count:  1
          Requests:
            cpu:        10m
            memory:     20Mi
          Environment:  <none>
          Mounts:
            /etc/pki/tls/metrics-certs from ingress-node-firewall-daemon-metrics-certs (ro)
            /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4qczm (ro)
      Conditions:
        Type              Status
        Initialized       True 
        Ready             False 
        ContainersReady   False 
        PodScheduled      True 
      Volumes:
        bpf-maps:
          Type:          HostPath (bare host directory volume)
          Path:          /sys/fs/bpf
          HostPathType:  DirectoryOrCreate
        ingress-node-firewall-daemon-metrics-certs:
          Type:        Secret (a volume populated by a Secret)
          SecretName:  ingress-node-firewall-daemon-metrics-certs
          Optional:    true
        syslog-socket:
          Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
          Medium:     
          SizeLimit:  <unset>
        kube-api-access-4qczm:
          Type:                    Projected (a volume that contains injected data from multiple sources)
          TokenExpirationSeconds:  3607
          ConfigMapName:           kube-root-ca.crt
          ConfigMapOptional:       <nil>
          DownwardAPI:             true
          ConfigMapName:           openshift-service-ca.crt
          ConfigMapOptional:       <nil>
      QoS Class:                   Burstable
      Node-Selectors:              node-role.kubernetes.io/worker=
      Tolerations:                 op=Exists
      Events:
        Type     Reason        Age                  From               Message
        ----     ------        ----                 ----               -------
        Normal   Scheduled     20m                  default-scheduler  Successfully assigned openshift-ingress-node-firewall/ingress-node-firewall-daemon-bwx6t to asood-10273-w6xnn-worker-h5l2k
        Normal   Pulling       20m                  kubelet            Pulling image "registry.redhat.io/openshift4/ingress-node-firewall-daemon@sha256:a7c7337ffcb9e17608a8d8bf701af9ad3e0c11e3a9b93fb912558cee1208c4f9"
        Normal   Pulled        20m                  kubelet            Successfully pulled image "registry.redhat.io/openshift4/ingress-node-firewall-daemon@sha256:a7c7337ffcb9e17608a8d8bf701af9ad3e0c11e3a9b93fb912558cee1208c4f9" in 3.678735249s
        Normal   Created       20m                  kubelet            Created container daemon
        Normal   Started       20m                  kubelet            Started container daemon
        Normal   Pulled        20m                  kubelet            Container image "registry.redhat.io/openshift4/ingress-node-firewall-daemon@sha256:a7c7337ffcb9e17608a8d8bf701af9ad3e0c11e3a9b93fb912558cee1208c4f9" already present on machine
        Normal   Created       20m                  kubelet            Created container events
        Normal   Started       20m                  kubelet            Started container events
        Normal   Pulling       20m                  kubelet            Pulling image "registry.redhat.io/openshift4/ose-kube-rbac-proxy@sha256:6ed81d739e83332a72459fe6b289b490bf53c3ca97d5af9fb34dbe98f7e99c6f"
        Normal   Pulled        20m                  kubelet            Successfully pulled image "registry.redhat.io/openshift4/ose-kube-rbac-proxy@sha256:6ed81d739e83332a72459fe6b289b490bf53c3ca97d5af9fb34dbe98f7e99c6f" in 2.602971686s
        Normal   Created       20m                  kubelet            Created container kube-rbac-proxy
        Normal   Started       20m                  kubelet            Started container kube-rbac-proxy
        Warning  NodeNotReady  17m                  node-controller    Node is not ready
        Normal   Pulled        13m                  kubelet            Container image "registry.redhat.io/openshift4/ingress-node-firewall-daemon@sha256:a7c7337ffcb9e17608a8d8bf701af9ad3e0c11e3a9b93fb912558cee1208c4f9" already present on machine
        Normal   Created       13m                  kubelet            Created container daemon
        Normal   Started       13m                  kubelet            Started container daemon
        Normal   Pulled        13m                  kubelet            Container image "registry.redhat.io/openshift4/ose-kube-rbac-proxy@sha256:6ed81d739e83332a72459fe6b289b490bf53c3ca97d5af9fb34dbe98f7e99c6f" already present on machine
        Normal   Created       13m                  kubelet            Created container kube-rbac-proxy
        Normal   Started       13m                  kubelet            Started container kube-rbac-proxy
        Normal   Pulled        12m (x4 over 13m)    kubelet            Container image "registry.redhat.io/openshift4/ingress-node-firewall-daemon@sha256:a7c7337ffcb9e17608a8d8bf701af9ad3e0c11e3a9b93fb912558cee1208c4f9" already present on machine
        Normal   Created       12m (x4 over 13m)    kubelet            Created container events
        Normal   Started       12m (x4 over 13m)    kubelet            Started container events
        Warning  BackOff       3m9s (x49 over 13m)  kubelet            Back-off restarting failed container
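      
      The logs of the failing events container were not captured above; they can be pulled with, for example:
      
      oc logs ingress-node-firewall-daemon-bwx6t -c events -n openshift-ingress-node-firewall --previous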
      
      oc logs ingress-node-firewall-controller-manager-d6cb6c859-sv9vn -n openshift-ingress-node-firewall 
      1.6669079899095576e+09    INFO    setup    Version    {"version.Version": "4.12.0"}
      I1027 21:59:50.960910       1 request.go:682] Waited for 1.037223877s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/apis/console.openshift.io/v1?timeout=32s
      1.6669079922668066e+09    INFO    controller-runtime.metrics    Metrics server is starting to listen    {"addr": "127.0.0.1:39300"}
      1.666907992268518e+09    INFO    controller-runtime.builder    skip registering a mutating webhook, object does not implement admission.Defaulter or WithDefaulter wasn't called    {"GVK": "ingressnodefirewall.openshift.io/v1alpha1, Kind=IngressNodeFirewall"}
      1.6669079922685819e+09    INFO    controller-runtime.builder    Registering a validating webhook    {"GVK": "ingressnodefirewall.openshift.io/v1alpha1, Kind=IngressNodeFirewall", "path": "/validate-ingressnodefirewall-openshift-io-v1alpha1-ingressnodefirewall"}
      1.666907992268735e+09    INFO    controller-runtime.webhook    Registering webhook    {"path": "/validate-ingressnodefirewall-openshift-io-v1alpha1-ingressnodefirewall"}
      1.6669079922691386e+09    INFO    platform    detecting platform version...
      1.666907992275584e+09    INFO    platform    route.openshift.io found in apis, platform is OpenShift
      1.6669079922756834e+09    INFO    platform    PlatformInfo [Name: OpenShift, K8SVersion: 1.25, OS: linux/amd64]
      1.6669079922757583e+09    INFO    setup    starting manager
      1.666907992276205e+09    INFO    controller-runtime.webhook.webhooks    Starting webhook server
      1.666907992276267e+09    INFO    Starting server    {"kind": "health probe", "addr": "[::]:8081"}
      1.6669079922762952e+09    INFO    Starting server    {"path": "/metrics", "kind": "metrics", "addr": "127.0.0.1:39300"}
      I1027 21:59:52.276877       1 leaderelection.go:248] attempting to acquire leader lease openshift-ingress-node-firewall/d902e78d.ingress-nodefw...
      1.6669079922770069e+09    INFO    controller-runtime.certwatcher    Updated current TLS certificate
      1.6669079922771842e+09    INFO    controller-runtime.webhook    Serving webhook server    {"host": "", "port": 9443}
      1.666907992277203e+09    INFO    controller-runtime.certwatcher    Starting certificate watcher
      I1027 22:00:10.113724       1 leaderelection.go:258] successfully acquired lease openshift-ingress-node-firewall/d902e78d.ingress-nodefw
      1.6669080101138086e+09    DEBUG    events    ingress-node-firewall-controller-manager-d6cb6c859-sv9vn_26498be8-e584-4131-8b0b-290c7a990f6b became leader    {"type": "Normal", "object": {"kind":"Lease","namespace":"openshift-ingress-node-firewall","name":"d902e78d.ingress-nodefw","uid":"18f2c4e6-ddf3-4644-9f70-88298a8db49d","apiVersion":"coordination.k8s.io/v1","resourceVersion":"60958"}, "reason": "LeaderElection"}
      1.666908010113973e+09    INFO    Starting EventSource    {"controller": "ingressnodefirewall", "controllerGroup": "ingressnodefirewall.openshift.io", "controllerKind": "IngressNodeFirewall", "source": "kind source: *v1alpha1.IngressNodeFirewall"}
      1.6669080101140242e+09    INFO    Starting EventSource    {"controller": "ingressnodefirewall", "controllerGroup": "ingressnodefirewall.openshift.io", "controllerKind": "IngressNodeFirewall", "source": "kind source: *v1.Node"}
      1.6669080101140344e+09    INFO    Starting EventSource    {"controller": "ingressnodefirewall", "controllerGroup": "ingressnodefirewall.openshift.io", "controllerKind": "IngressNodeFirewall", "source": "kind source: *v1alpha1.IngressNodeFirewallNodeState"}
      1.66690801011404e+09    INFO    Starting Controller    {"controller": "ingressnodefirewall", "controllerGroup": "ingressnodefirewall.openshift.io", "controllerKind": "IngressNodeFirewall"}
      1.666908010114034e+09    INFO    Starting EventSource    {"controller": "ingressnodefirewallconfig", "controllerGroup": "ingressnodefirewall.openshift.io", "controllerKind": "IngressNodeFirewallConfig", "source": "kind source: *v1alpha1.IngressNodeFirewallConfig"}
      1.6669080101140623e+09    INFO    Starting EventSource    {"controller": "ingressnodefirewallconfig", "controllerGroup": "ingressnodefirewall.openshift.io", "controllerKind": "IngressNodeFirewallConfig", "source": "kind source: *v1.DaemonSet"}
      1.6669080101140668e+09    INFO    Starting Controller    {"controller": "ingressnodefirewallconfig", "controllerGroup": "ingressnodefirewall.openshift.io", "controllerKind": "IngressNodeFirewallConfig"}
      1.6669080102193794e+09    INFO    Starting workers    {"controller": "ingressnodefirewall", "controllerGroup": "ingressnodefirewall.openshift.io", "controllerKind": "IngressNodeFirewall", "worker count": 1}
      1.6669080102194178e+09    INFO    Starting workers    {"controller": "ingressnodefirewallconfig", "controllerGroup": "ingressnodefirewall.openshift.io", "controllerKind": "IngressNodeFirewallConfig", "worker count": 1}
      1.6669080102196178e+09    INFO    controllers.IngressNodeFirewallConfig.syncIngressNodeFirewallConfigResources    Start
      2022/10/27 22:00:10 reconciling (apps/v1, Kind=DaemonSet) openshift-ingress-node-firewall/ingress-node-firewall-daemon
      2022/10/27 22:00:10 update was successful
      1.666908010260744e+09    INFO    controllers.IngressNodeFirewallConfig.syncIngressNodeFirewallConfigResources    Start
      2022/10/27 22:00:10 reconciling (apps/v1, Kind=DaemonSet) openshift-ingress-node-firewall/ingress-node-firewall-daemon
      2022/10/27 22:00:10 update was successful
      1.6669080103077493e+09    ERROR    controllers.IngressNodeFirewallConfig    Failed to update ingress node firewall config status    {"ingress node firewall config": "openshift-ingress-node-firewall/ingressnodefirewallconfig", "Desired status": "Available", "error": "could not update status for object &{TypeMeta:{Kind:IngressNodeFirewallConfig APIVersion:ingressnodefirewall.openshift.io/v1alpha1} ObjectMeta:{Name:ingressnodefirewallconfig GenerateName: Namespace:openshift-ingress-node-firewall SelfLink: UID:a31e4bc5-2262-41af-8262-8c0934be19e1 ResourceVersion:57963 Generation:1 CreationTimestamp:2022-10-27 21:52:12 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[] OwnerReferences:[] Finalizers:[] ManagedFields:[{Manager:kubectl-create Operation:Update APIVersion:ingressnodefirewall.openshift.io/v1alpha1 Time:2022-10-27 21:52:12 +0000 UTC FieldsType:FieldsV1 FieldsV1:{\"f:spec\":{\".\":{},\"f:nodeSelector\":{\".\":{},\"f:node-role.kubernetes.io/worker\":{}}}} Subresource:} {Manager:manager Operation:Update APIVersion:ingressnodefirewall.openshift.io/v1alpha1 Time:2022-10-27 21:55:44 +0000 UTC FieldsType:FieldsV1 FieldsV1:{\"f:status\":{\".\":{},\"f:conditions\":{}}} Subresource:status}]} Spec:{NodeSelector:map[node-role.kubernetes.io/worker:]} Status:{Conditions:[{Type:Available Status:True ObservedGeneration:0 LastTransitionTime:2022-10-27 22:00:10.290982302 +0000 UTC m=+20.432947581 Reason:Available Message:} {Type:Progressing Status:False ObservedGeneration:0 LastTransitionTime:2022-10-27 22:00:10.290982302 +0000 UTC m=+20.432947581 Reason:Progressing Message:} {Type:Degraded Status:False ObservedGeneration:0 LastTransitionTime:2022-10-27 22:00:10.290982302 +0000 UTC m=+20.432947581 Reason:Degraded Message:}]}}: Operation cannot be fulfilled on ingressnodefirewallconfigs.ingressnodefirewall.openshift.io \"ingressnodefirewallconfig\": the object has been modified; please apply your changes to the latest version and try again", "errorVerbose": "Operation cannot be fulfilled on ingressnodefirewallconfigs.ingressnodefirewall.openshift.io \"ingressnodefirewallconfig\": the object has been modified; please apply your changes to the latest version and try again\ncould not update status for object &{TypeMeta:{Kind:IngressNodeFirewallConfig APIVersion:ingressnodefirewall.openshift.io/v1alpha1} ObjectMeta:{Name:ingressnodefirewallconfig GenerateName: Namespace:openshift-ingress-node-firewall SelfLink: UID:a31e4bc5-2262-41af-8262-8c0934be19e1 ResourceVersion:57963 Generation:1 CreationTimestamp:2022-10-27 21:52:12 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[] OwnerReferences:[] Finalizers:[] ManagedFields:[{Manager:kubectl-create Operation:Update APIVersion:ingressnodefirewall.openshift.io/v1alpha1 Time:2022-10-27 21:52:12 +0000 UTC FieldsType:FieldsV1 FieldsV1:{\"f:spec\":{\".\":{},\"f:nodeSelector\":{\".\":{},\"f:node-role.kubernetes.io/worker\":{}}}} Subresource:} {Manager:manager Operation:Update APIVersion:ingressnodefirewall.openshift.io/v1alpha1 Time:2022-10-27 21:55:44 +0000 UTC FieldsType:FieldsV1 FieldsV1:{\"f:status\":{\".\":{},\"f:conditions\":{}}} Subresource:status}]} Spec:{NodeSelector:map[node-role.kubernetes.io/worker:]} Status:{Conditions:[{Type:Available Status:True ObservedGeneration:0 LastTransitionTime:2022-10-27 22:00:10.290982302 +0000 UTC m=+20.432947581 Reason:Available Message:} {Type:Progressing Status:False ObservedGeneration:0 LastTransitionTime:2022-10-27 
22:00:10.290982302 +0000 UTC m=+20.432947581 Reason:Progressing Message:} {Type:Degraded Status:False ObservedGeneration:0 LastTransitionTime:2022-10-27 22:00:10.290982302 +0000 UTC m=+20.432947581 Reason:Degraded Message:}]}}\ngithub.com/openshift/ingress-node-firewall/pkg/status.Update\n\t/workspace/pkg/status/status.go:49\ngithub.com/openshift/ingress-node-firewall/controllers.(*IngressNodeFirewallConfigReconciler).Reconcile\n\t/workspace/controllers/ingressnodefirewallconfig_controller.go:115\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile\n\t/workspace/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:121\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/workspace/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:320\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/workspace/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:273\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/workspace/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:234\nruntime.goexit\n\t/usr/lib/golang/src/runtime/asm_amd64.s:1594"}
      github.com/openshift/ingress-node-firewall/controllers.(*IngressNodeFirewallConfigReconciler).Reconcile
          /workspace/controllers/ingressnodefirewallconfig_controller.go:116
      sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile
          /workspace/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:121
      sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
          /workspace/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:320
      sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
          /workspace/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:273
      sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2
          /workspace/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:234
      1.666908010307969e+09    ERROR    Reconciler error    {"controller": "ingressnodefirewallconfig", "controllerGroup": "ingressnodefirewall.openshift.io", "controllerKind": "IngressNodeFirewallConfig", "IngressNodeFirewallConfig": {"name":"ingressnodefirewallconfig","namespace":"openshift-ingress-node-firewall"}, "namespace": "openshift-ingress-node-firewall", "name": "ingressnodefirewallconfig", "reconcileID": "75076a96-84cf-49a4-ac58-3be86f8f0a96", "error": "could not update status for object &{TypeMeta:{Kind:IngressNodeFirewallConfig APIVersion:ingressnodefirewall.openshift.io/v1alpha1} ObjectMeta:{Name:ingressnodefirewallconfig GenerateName: Namespace:openshift-ingress-node-firewall SelfLink: UID:a31e4bc5-2262-41af-8262-8c0934be19e1 ResourceVersion:57963 Generation:1 CreationTimestamp:2022-10-27 21:52:12 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[] OwnerReferences:[] Finalizers:[] ManagedFields:[{Manager:kubectl-create Operation:Update APIVersion:ingressnodefirewall.openshift.io/v1alpha1 Time:2022-10-27 21:52:12 +0000 UTC FieldsType:FieldsV1 FieldsV1:{\"f:spec\":{\".\":{},\"f:nodeSelector\":{\".\":{},\"f:node-role.kubernetes.io/worker\":{}}}} Subresource:} {Manager:manager Operation:Update APIVersion:ingressnodefirewall.openshift.io/v1alpha1 Time:2022-10-27 21:55:44 +0000 UTC FieldsType:FieldsV1 FieldsV1:{\"f:status\":{\".\":{},\"f:conditions\":{}}} Subresource:status}]} Spec:{NodeSelector:map[node-role.kubernetes.io/worker:]} Status:{Conditions:[{Type:Available Status:True ObservedGeneration:0 LastTransitionTime:2022-10-27 22:00:10.290982302 +0000 UTC m=+20.432947581 Reason:Available Message:} {Type:Progressing Status:False ObservedGeneration:0 LastTransitionTime:2022-10-27 22:00:10.290982302 +0000 UTC m=+20.432947581 Reason:Progressing Message:} {Type:Degraded Status:False ObservedGeneration:0 LastTransitionTime:2022-10-27 22:00:10.290982302 +0000 UTC m=+20.432947581 Reason:Degraded Message:}]}}: Operation cannot be fulfilled on ingressnodefirewallconfigs.ingressnodefirewall.openshift.io \"ingressnodefirewallconfig\": the object has been modified; please apply your changes to the latest version and try again", "errorVerbose": "Operation cannot be fulfilled on ingressnodefirewallconfigs.ingressnodefirewall.openshift.io \"ingressnodefirewallconfig\": the object has been modified; please apply your changes to the latest version and try again\ncould not update status for object &{TypeMeta:{Kind:IngressNodeFirewallConfig APIVersion:ingressnodefirewall.openshift.io/v1alpha1} ObjectMeta:{Name:ingressnodefirewallconfig GenerateName: Namespace:openshift-ingress-node-firewall SelfLink: UID:a31e4bc5-2262-41af-8262-8c0934be19e1 ResourceVersion:57963 Generation:1 CreationTimestamp:2022-10-27 21:52:12 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[] OwnerReferences:[] Finalizers:[] ManagedFields:[{Manager:kubectl-create Operation:Update APIVersion:ingressnodefirewall.openshift.io/v1alpha1 Time:2022-10-27 21:52:12 +0000 UTC FieldsType:FieldsV1 FieldsV1:{\"f:spec\":{\".\":{},\"f:nodeSelector\":{\".\":{},\"f:node-role.kubernetes.io/worker\":{}}}} Subresource:} {Manager:manager Operation:Update APIVersion:ingressnodefirewall.openshift.io/v1alpha1 Time:2022-10-27 21:55:44 +0000 UTC FieldsType:FieldsV1 FieldsV1:{\"f:status\":{\".\":{},\"f:conditions\":{}}} Subresource:status}]} Spec:{NodeSelector:map[node-role.kubernetes.io/worker:]} Status:{Conditions:[{Type:Available Status:True 
ObservedGeneration:0 LastTransitionTime:2022-10-27 22:00:10.290982302 +0000 UTC m=+20.432947581 Reason:Available Message:} {Type:Progressing Status:False ObservedGeneration:0 LastTransitionTime:2022-10-27 22:00:10.290982302 +0000 UTC m=+20.432947581 Reason:Progressing Message:} {Type:Degraded Status:False ObservedGeneration:0 LastTransitionTime:2022-10-27 22:00:10.290982302 +0000 UTC m=+20.432947581 Reason:Degraded Message:}]}}\ngithub.com/openshift/ingress-node-firewall/pkg/status.Update\n\t/workspace/pkg/status/status.go:49\ngithub.com/openshift/ingress-node-firewall/controllers.(*IngressNodeFirewallConfigReconciler).Reconcile\n\t/workspace/controllers/ingressnodefirewallconfig_controller.go:115\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile\n\t/workspace/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:121\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/workspace/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:320\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/workspace/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:273\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/workspace/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:234\nruntime.goexit\n\t/usr/lib/golang/src/runtime/asm_amd64.s:1594"}
      sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
          /workspace/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:326
      sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
          /workspace/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:273
      sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2
          /workspace/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:234
      1.6669080103081396e+09    INFO    controllers.IngressNodeFirewallConfig.syncIngressNodeFirewallConfigResources    Start
      2022/10/27 22:00:10 reconciling (apps/v1, Kind=DaemonSet) openshift-ingress-node-firewall/ingress-node-firewall-daemon
      2022/10/27 22:00:10 update was successful
      1.666908010337311e+09    INFO    controllers.IngressNodeFirewallConfig.syncIngressNodeFirewallConfigResources    Start
      2022/10/27 22:00:10 reconciling (apps/v1, Kind=DaemonSet) openshift-ingress-node-firewall/ingress-node-firewall-daemon
      2022/10/27 22:00:10 update was successful
      1.6669081197691467e+09    INFO    controllers.IngressNodeFirewallConfig.syncIngressNodeFirewallConfigResources    Start
      2022/10/27 22:01:59 reconciling (apps/v1, Kind=DaemonSet) openshift-ingress-node-firewall/ingress-node-firewall-daemon
      2022/10/27 22:01:59 update was successful
      1.6669081197933285e+09    INFO    controllers.IngressNodeFirewallConfig.syncIngressNodeFirewallConfigResources    Start
      2022/10/27 22:01:59 reconciling (apps/v1, Kind=DaemonSet) openshift-ingress-node-firewall/ingress-node-firewall-daemon
      2022/10/27 22:01:59 update was successful
      
      
      
      

       

       

       

       

            People

              mmahmoud@redhat.com Mohamed Mahmoud
              rhn-support-asood Arti Sood
              Arti Sood Arti Sood
              Votes: 0
              Watchers: 8
