OpenShift Bugs / OCPBUGS-2520

Cleaning crio produces kubelet errors



      Description of problem:

      If I run a CRI-O cleanup on a worker following this procedure:

      https://docs.openshift.com/container-platform/4.10/support/troubleshooting/troubleshooting-crio-issues.html

      the kubelet reports errors for some of the pods once it is restarted.

      Version-Release number of selected component (if applicable):

      OCP 4.10

      How reproducible:

      Always, by following the procedure: https://docs.openshift.com/container-platform/4.10/support/troubleshooting/troubleshooting-crio-issues.html
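
      For quick reference, the procedure as exercised below condenses to the following sequence (a recap of the commands run in this report; the node name is specific to this cluster):

      $ oc adm cordon worker-0.el8k-ztp-1.hpecloud.org
      $ oc adm drain worker-0.el8k-ztp-1.hpecloud.org --ignore-daemonsets --delete-emptydir-data

      # then, on the node itself:
      [root@worker-0 ~]# systemctl stop kubelet
      [root@worker-0 ~]# crictl rmp -fa
      [root@worker-0 ~]# crio wipe -f
      [root@worker-0 ~]# systemctl start crio
      [root@worker-0 ~]# systemctl start kubelet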

      Steps to Reproduce:

      1. Cordon the worker:

      $ oc adm cordon worker-0.el8k-ztp-1.hpecloud.org
      node/worker-0.el8k-ztp-1.hpecloud.org cordoned

      2. Drain the worker:

      $ oc adm drain worker-0.el8k-ztp-1.hpecloud.org --ignore-daemonsets --delete-emptydir-data
      node/worker-0.el8k-ztp-1.hpecloud.org already cordoned
      WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-node-tuning-operator/tuned-jdlqq, openshift-dns/dns-default-47s42, openshift-dns/node-resolver-vm5v2, openshift-image-registry/node-ca-9fs4h, openshift-ingress-canary/ingress-canary-ck2jg, openshift-machine-config-operator/machine-config-daemon-9zdsw, openshift-monitoring/node-exporter-k7hrs, openshift-multus/multus-8pf46, openshift-multus/multus-additional-cni-plugins-pq7kb, openshift-multus/network-metrics-daemon-gk2sh, openshift-network-diagnostics/network-check-target-q4lhb, openshift-ovn-kubernetes/ovnkube-node-kq42w, openshift-ptp/linuxptp-daemon-87bqh, openshift-sriov-network-operator/sriov-network-config-daemon-s7m5s
      evicting pod open-cluster-management-agent/klusterlet-work-agent-6fb8db56c6-ckfl4
      evicting pod open-cluster-management-agent/klusterlet-registration-agent-84d6cb554b-85lc4
      evicting pod openshift-marketplace/e87e0640ee2264cc35ef2c1ed97be0a46a4ef9100d11a48b638a6ae158tbtd5
      evicting pod openshift-marketplace/d661dd28df06e41d7badaf307c5742fc9e10ffb0f717371a3be77ad8ackvpmp
      evicting pod openshift-monitoring/prometheus-k8s-1
      evicting pod openshift-operator-lifecycle-manager/collect-profiles-27756615-582pz
      evicting pod open-cluster-management-agent/klusterlet-registration-agent-84d6cb554b-shsm6
      evicting pod open-cluster-management-agent-addon/governance-policy-framework-656578c6d6-k9pcn
      evicting pod open-cluster-management-agent/klusterlet-work-agent-6fb8db56c6-4kdj9
      evicting pod openshift-monitoring/prometheus-adapter-8bc44b565-m2c6f
      evicting pod openshift-ingress/router-default-5c47d949df-zjwk5
      evicting pod openshift-marketplace/72ad2bad22e0747d66a0cc0ec31afee3f40f84e256d2c0c17d6369903ffq5kt
      evicting pod openshift-marketplace/5ae0fc37abddd09b5423b3a6ed1245a2aab9106478b24a2d51ee7cd878v4vkf
      evicting pod openshift-monitoring/thanos-querier-8574bbf668-hnn2t
      evicting pod open-cluster-management-agent-addon/klusterlet-addon-workmgr-785f549-qlz9j
      pod/5ae0fc37abddd09b5423b3a6ed1245a2aab9106478b24a2d51ee7cd878v4vkf evicted
      pod/e87e0640ee2264cc35ef2c1ed97be0a46a4ef9100d11a48b638a6ae158tbtd5 evicted
      pod/72ad2bad22e0747d66a0cc0ec31afee3f40f84e256d2c0c17d6369903ffq5kt evicted
      pod/d661dd28df06e41d7badaf307c5742fc9e10ffb0f717371a3be77ad8ackvpmp evicted
      pod/collect-profiles-27756615-582pz evicted
      I1010 12:50:07.152392   33827 request.go:665] Waited for 1.000279283s due to client-side throttling, not priority and fairness, request: POST:https://api.el8k-ztp-1.hpecloud.org:6443/api/v1/namespaces/open-cluster-management-agent/pods/klusterlet-registration-agent-84d6cb554b-85lc4/eviction
      pod/klusterlet-work-agent-6fb8db56c6-4kdj9 evicted
      pod/klusterlet-work-agent-6fb8db56c6-ckfl4 evicted
      pod/prometheus-adapter-8bc44b565-m2c6f evicted
      pod/klusterlet-registration-agent-84d6cb554b-85lc4 evicted
      pod/prometheus-k8s-1 evicted
      pod/governance-policy-framework-656578c6d6-k9pcn evicted
      pod/klusterlet-registration-agent-84d6cb554b-shsm6 evicted
      pod/thanos-querier-8574bbf668-hnn2t evicted
      pod/klusterlet-addon-workmgr-785f549-qlz9j evicted
      pod/router-default-5c47d949df-zjwk5 evicted
      node/worker-0.el8k-ztp-1.hpecloud.org evicted 
      3. Stop the kubelet and clean up CRI-O:
      [root@worker-0 ~]# systemctl stop kubelet
      [root@worker-0 ~]# crictl rmp -fa
      Stopped sandbox 73f1fe0dcf4e76bfe2d09a6ce0eb634412412ee85078699daae785bf2feb4ec3
      Stopped sandbox 08cd6da07afdfac85dd5f286dd174e48013214ec3fe16536780d36f7bd20f478
      Stopped sandbox 20f53bd011e84ea2beb9972c8ee018a7e3c9595459477386a437ed4a395fef1d
      Stopped sandbox d618ecc7aec8a6e4faf4d9f7184d099e4b6a00a9b267f28a2a0049814673502b
      Stopped sandbox fb26be3e86316841682cbc1d090653b03b327cb260a24840454957fb3a9e85ef
      Stopped sandbox 9a10d52fda9fe07fd90949c9d2481e34220db4d78737028c72edfc8d47353d98
      Stopped sandbox eb7cb280d3daceaf56083f5a3d27f34ebcecb332dda4f79ec6c705dca8e1ab38
      Removed sandbox 20f53bd011e84ea2beb9972c8ee018a7e3c9595459477386a437ed4a395fef1d
      Removed sandbox 73f1fe0dcf4e76bfe2d09a6ce0eb634412412ee85078699daae785bf2feb4ec3
      Removed sandbox 9a10d52fda9fe07fd90949c9d2481e34220db4d78737028c72edfc8d47353d98
      Removed sandbox eb7cb280d3daceaf56083f5a3d27f34ebcecb332dda4f79ec6c705dca8e1ab38
      Removed sandbox d618ecc7aec8a6e4faf4d9f7184d099e4b6a00a9b267f28a2a0049814673502b
      Removed sandbox 08cd6da07afdfac85dd5f286dd174e48013214ec3fe16536780d36f7bd20f478
      Removed sandbox fb26be3e86316841682cbc1d090653b03b327cb260a24840454957fb3a9e85ef
      Stopped sandbox be3a7142586a1db6dbad8c72f7ca15e9fd2e86f7409785dd8dbc4a5f2e602266
      Stopped sandbox d4643eba7146e89d7c447125540ba35ec2246af5da27201693acf1d6ab563f75
      Stopped sandbox a2488d7e78261e28b44df4909ea8708c560c205965a63b51ebf814d72be4239b
      Removed sandbox d4643eba7146e89d7c447125540ba35ec2246af5da27201693acf1d6ab563f75
      Removed sandbox a2488d7e78261e28b44df4909ea8708c560c205965a63b51ebf814d72be4239b
      Removed sandbox be3a7142586a1db6dbad8c72f7ca15e9fd2e86f7409785dd8dbc4a5f2e602266
      Stopped sandbox 113815c1e11e371826feb880ca978cd7fe126bdfd3a79731b43fcf6ec0d4b170
      Removed sandbox 113815c1e11e371826feb880ca978cd7fe126bdfd3a79731b43fcf6ec0d4b170
      Stopped sandbox aca2d072e72518befd030037b0ee90b632c80c4cc4de06e31a0a79bfc8b8e9d0
      Removed sandbox aca2d072e72518befd030037b0ee90b632c80c4cc4de06e31a0a79bfc8b8e9d0
      stopping the pod sandbox "ae23fddcad20f190975afffafb7a2b0a09d75307cd870a771128f82c556a567f" failed: rpc error: code = Unknown desc = failed to destroy network for pod sandbox k8s_network-check-target-q4lhb_openshift-network-diagnostics_9020fd5c-6f77-4d0f-ac05-ee9164da6e11_0(ae23fddcad20f190975afffafb7a2b0a09d75307cd870a771128f82c556a567f): error removing pod openshift-network-diagnostics_network-check-target-q4lhb from CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (delete): delegateDel: error invoking DelegateDel - "ovn-k8s-cni-overlay": error in getting result from DelNetwork: failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused
      stopping the pod sandbox "d3f6e044a8db57742318f7b08cadba542ceb5066a7ef9225382c4e33380d4b87" failed: rpc error: code = Unknown desc = failed to destroy network for pod sandbox k8s_dns-default-47s42_openshift-dns_476c52e7-cc08-48d5-84f9-09e02dc9cc84_0(d3f6e044a8db57742318f7b08cadba542ceb5066a7ef9225382c4e33380d4b87): error removing pod openshift-dns_dns-default-47s42 from CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (delete): delegateDel: error invoking DelegateDel - "ovn-k8s-cni-overlay": error in getting result from DelNetwork: failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused
      stopping the pod sandbox "71eba57be398d9e05bf116f8a4cc6da629aaa14ccc907c79ed599eda4a7868c8" failed: rpc error: code = Unknown desc = failed to destroy network for pod sandbox k8s_ingress-canary-ck2jg_openshift-ingress-canary_7a6524e1-7561-4cc7-89e5-76f259f32acb_0(71eba57be398d9e05bf116f8a4cc6da629aaa14ccc907c79ed599eda4a7868c8): error removing pod openshift-ingress-canary_ingress-canary-ck2jg from CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (delete): delegateDel: error invoking DelegateDel - "ovn-k8s-cni-overlay": error in getting result from DelNetwork: failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused
      stopping the pod sandbox "f344b4d695010ec8fdba56b3ff35fb23cf8bf73ee6454442a882616aea2b869d" failed: rpc error: code = Unknown desc = failed to destroy network for pod sandbox k8s_network-metrics-daemon-gk2sh_openshift-multus_9106be97-2586-4033-9ac8-e7507e6db948_0(f344b4d695010ec8fdba56b3ff35fb23cf8bf73ee6454442a882616aea2b869d): error removing pod openshift-multus_network-metrics-daemon-gk2sh from CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (delete): delegateDel: error invoking DelegateDel - "ovn-k8s-cni-overlay": error in getting result from DelNetwork: failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused
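
      All four failures are identical: multus delegates the teardown to ovn-k8s-cni-overlay, which gets connection refused on /var/run/ovn-kubernetes/cni/ovn-cni-server.sock. A plausible explanation is that crictl rmp -fa had already stopped the ovnkube-node sandbox by the time these four sandboxes were torn down, so nothing was listening on the socket. A quick check on the node could confirm this (a hypothetical verification step, not part of the original report):

      [root@worker-0 ~]# ls -l /var/run/ovn-kubernetes/cni/   # is the socket still present?
      [root@worker-0 ~]# ss -xl | grep ovn-cni-server || echo "socket not listening"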
      4. These are the pods that were not deleted:

      [root@worker-0 ~]# crictl pods
      POD ID              CREATED             STATE               NAME                           NAMESPACE                       ATTEMPT             RUNTIME
      d3f6e044a8db5       About an hour ago   Ready               dns-default-47s42              openshift-dns                   0                   (default)
      71eba57be398d       About an hour ago   Ready               ingress-canary-ck2jg           openshift-ingress-canary        0                   (default)
      f344b4d695010       About an hour ago   Ready               network-metrics-daemon-gk2sh   openshift-multus                0                   (default)
      ae23fddcad20f       About an hour ago   Ready               network-check-target-q4lhb     openshift-network-diagnostics   0                   (default)
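
      The leftover sandboxes can be retried individually, for example over the remaining pod IDs (a hypothetical follow-up, not run in the original report; it would likely fail with the same CNI teardown error while the OVN CNI server is unreachable):

      [root@worker-0 ~]# crictl rmp -f $(crictl pods -q)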

      5. Wipe CRI-O, then start crio and the kubelet:

      [root@worker-0 ~]# crio wipe -f
      INFO[2022-10-10 11:00:56.070613828Z] Starting CRI-O, version: 1.23.2-8.rhaos4.10.git8ad5d25.el8, git: () 
      INFO[2022-10-10 11:00:56.074752086Z] Internal wipe not set, meaning crio wipe will wipe. In the future, all wipes after reboot will happen when starting the crio server. 
      INFO[2022-10-10 11:00:56.078155353Z] Wiping containers                            
      INFO[2022-10-10 11:00:56.112267649Z] Deleted container ae23fddcad20f190975afffafb7a2b0a09d75307cd870a771128f82c556a567f 
      INFO[2022-10-10 11:00:56.140222855Z] Deleted container f344b4d695010ec8fdba56b3ff35fb23cf8bf73ee6454442a882616aea2b869d 
      INFO[2022-10-10 11:00:56.164197265Z] Deleted container 71eba57be398d9e05bf116f8a4cc6da629aaa14ccc907c79ed599eda4a7868c8 
      INFO[2022-10-10 11:00:56.189218701Z] Deleted container d3f6e044a8db57742318f7b08cadba542ceb5066a7ef9225382c4e33380d4b87 
      INFO[2022-10-10 11:00:56.209358281Z] Deleted container 1e87f96cd3a739dfe24ba9794128fc445b679d8ac22ae3968f1d44a736e262e4 
      INFO[2022-10-10 11:00:56.233239891Z] Deleted container 3b8a01ca7146ef94d5df7c3e11705c45e4ae379cf82c65a1e2a3d3abbdf1c1da 
      INFO[2022-10-10 11:00:56.253301668Z] Deleted container f7937d108128e6acb1929c537263b94640a6fae019df3f145e8026471197a25a 
      INFO[2022-10-10 11:00:56.273304826Z] Deleted container bc1fdcd0d7957fad14d07e635b456abde5a6bc57a87551ff6b084386cbfe96f5 
      INFO[2022-10-10 11:00:56.297301871Z] Deleted container fbb8f79a8f2d3dda48b7830a658d9ac56a47bc78fb0f2a43845245fe6c56ebeb 
      INFO[2022-10-10 11:00:56.318363548Z] Deleted container 080b887d99ff3a674337a1788f8b028d220031bcf38c95f3f4414daae38f380c 
      INFO[2022-10-10 11:00:56.318411991Z] Wiping images                                
      INFO[2022-10-10 11:00:56.337293162Z] Deleted image 0cdc4dd092f48d93d3248a0e0d9004ba30efb3ababf7c6bd35d92c1c4e8269c9 
      ERRO[2022-10-10 11:00:56.337881188Z] Unable to delete image 0cdc4dd092f48d93d3248a0e0d9004ba30efb3ababf7c6bd35d92c1c4e8269c9: identifier is not an image 
      ERRO[2022-10-10 11:00:56.337948972Z] Unable to delete image 0cdc4dd092f48d93d3248a0e0d9004ba30efb3ababf7c6bd35d92c1c4e8269c9: identifier is not an image 
      ERRO[2022-10-10 11:00:56.338005058Z] Unable to delete image 0cdc4dd092f48d93d3248a0e0d9004ba30efb3ababf7c6bd35d92c1c4e8269c9: identifier is not an image 
      INFO[2022-10-10 11:00:56.347659751Z] Deleted image ca29654379ff0964142fd04fbf0bf4e6a6328195bfaf76968d4a8e2b1ce27577 
      INFO[2022-10-10 11:00:56.357479790Z] Deleted image 55e294662f2c60c72271f4261f19b1be9c52515ffc76ba79f93bb44022fec181 
      INFO[2022-10-10 11:00:56.368732292Z] Deleted image 174eb5411b86f6e54e111e45a6c864b2b3d3bb344aee395a477b045832975826 
      INFO[2022-10-10 11:00:56.402073510Z] Deleted image 0c90509e9921497c98350bd8f46a886e8d9b662403f74dae9621567b9d16ce23 
      ERRO[2022-10-10 11:00:56.402397379Z] Unable to delete image 55e294662f2c60c72271f4261f19b1be9c52515ffc76ba79f93bb44022fec181: identifier is not an image 
      INFO[2022-10-10 11:00:56.411666732Z] Deleted image 58fbd478b7936c1c8a61b2626b29f4c871c4d61db97c5788fca56efedc09090a 
      [root@worker-0 ~]# systemctl start crio 
      [root@worker-0 ~]# systemctl start kubelet
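
      A sanity check worth running between systemctl start crio and systemctl start kubelet would be to confirm the runtime state is really empty (hypothetical; not run in the original report):

      [root@worker-0 ~]# crictl pods    # should list no sandboxes after the wipe
      [root@worker-0 ~]# crictl ps -a   # should list no containers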

      6. Errors from the kubelet logs:
      Oct 10 11:13:58 worker-0.el8k-ztp-1.hpecloud.org bash[134676]: I1010 11:13:58.838068  134676 kubelet_getters.go:176] "Pod status updated" pod="openshift-kni-infra/coredns-worker-0.el8k-ztp-1.hpecloud.org" status=Running
      Oct 10 11:13:58 worker-0.el8k-ztp-1.hpecloud.org bash[134676]: I1010 11:13:58.838125  134676 kubelet_getters.go:176] "Pod status updated" pod="openshift-kni-infra/keepalived-worker-0.el8k-ztp-1.hpecloud.org" status=Running
      Oct 10 11:13:59 worker-0.el8k-ztp-1.hpecloud.org bash[134676]: E1010 11:13:59.037734  134676 manager.go:1123] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6a2c094acc516ba5cec01cf8460bb05.slice/crio-113815c1e11e371826feb880ca978cd7fe126bdfd3a79731b43fcf6ec0d4b170.scope: Error finding container 113815c1e11e371826feb880ca978cd7fe126bdfd3a79731b43fcf6ec0d4b170: Status 404 returned error &{%!s(*http.body=&{0xc001b90df8 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x859fa0) %!s(func() error=0x85a0a0)}
      Oct 10 11:13:59 worker-0.el8k-ztp-1.hpecloud.org bash[134676]: E1010 11:13:59.038557  134676 manager.go:1123] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b86fc0c_29fb_4462_83d2_ad7e8abecc4d.slice/crio-08cd6da07afdfac85dd5f286dd174e48013214ec3fe16536780d36f7bd20f478.scope: Error finding container 08cd6da07afdfac85dd5f286dd174e48013214ec3fe16536780d36f7bd20f478: Status 404 returned error &{%!s(*http.body=&{0xc000fe53c8 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x859fa0) %!s(func() error=0x85a0a0)}
      Oct 10 11:13:59 worker-0.el8k-ztp-1.hpecloud.org bash[134676]: E1010 11:13:59.039362  134676 manager.go:1123] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9020fd5c_6f77_4d0f_ac05_ee9164da6e11.slice/crio-bc1fdcd0d7957fad14d07e635b456abde5a6bc57a87551ff6b084386cbfe96f5.scope: Error finding container bc1fdcd0d7957fad14d07e635b456abde5a6bc57a87551ff6b084386cbfe96f5: Status 404 returned error &{%!s(*http.body=&{0xc002e020c0 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x859fa0) %!s(func() error=0x85a0a0)}
      Oct 10 11:13:59 worker-0.el8k-ztp-1.hpecloud.org bash[134676]: E1010 11:13:59.040085  134676 manager.go:1123] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8964bd68_8345_475d_8be3_f0e705419a65.slice/crio-d618ecc7aec8a6e4faf4d9f7184d099e4b6a00a9b267f28a2a0049814673502b.scope: Error finding container d618ecc7aec8a6e4faf4d9f7184d099e4b6a00a9b267f28a2a0049814673502b: Status 404 returned error &{%!s(*http.body=&{0xc002e02108 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x859fa0) %!s(func() error=0x85a0a0)}
      Oct 10 11:13:59 worker-0.el8k-ztp-1.hpecloud.org bash[134676]: E1010 11:13:59.040761  134676 manager.go:1123] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9020fd5c_6f77_4d0f_ac05_ee9164da6e11.slice/crio-ae23fddcad20f190975afffafb7a2b0a09d75307cd870a771128f82c556a567f.scope: Error finding container ae23fddcad20f190975afffafb7a2b0a09d75307cd870a771128f82c556a567f: Status 404 returned error &{%!s(*http.body=&{0xc001b90e88 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x859fa0) %!s(func() error=0x85a0a0)}
      Oct 10 11:13:59 worker-0.el8k-ztp-1.hpecloud.org bash[134676]: E1010 11:13:59.041568  134676 manager.go:1123] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6fd09327_6df6_4f08_a264_a33292850671.slice/crio-be3a7142586a1db6dbad8c72f7ca15e9fd2e86f7409785dd8dbc4a5f2e602266.scope: Error finding container be3a7142586a1db6dbad8c72f7ca15e9fd2e86f7409785dd8dbc4a5f2e602266: Status 404 returned error &{%!s(*http.body=&{0xc001b90f00 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x859fa0) %!s(func() error=0x85a0a0)}
      Oct 10 11:13:59 worker-0.el8k-ztp-1.hpecloud.org bash[134676]: E1010 11:13:59.042211  134676 manager.go:1123] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4dcf8a95_8afc_4ace_a783_6cf08fb7fbb2.slice/crio-9a10d52fda9fe07fd90949c9d2481e34220db4d78737028c72edfc8d47353d98.scope: Error finding container 9a10d52fda9fe07fd90949c9d2481e34220db4d78737028c72edfc8d47353d98: Status 404 returned error &{%!s(*http.body=&{0xc000fe5470 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x859fa0) %!s(func() error=0x85a0a0)}
      Oct 10 11:13:59 worker-0.el8k-ztp-1.hpecloud.org bash[134676]: E1010 11:13:59.042702  134676 manager.go:1123] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6247d83e_50b1_4722_a7da_6fe3373bce95.slice/crio-eb7cb280d3daceaf56083f5a3d27f34ebcecb332dda4f79ec6c705dca8e1ab38.scope: Error finding container eb7cb280d3daceaf56083f5a3d27f34ebcecb332dda4f79ec6c705dca8e1ab38: Status 404 returned error &{%!s(*http.body=&{0xc002e02198 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x859fa0) %!s(func() error=0x85a0a0)}
      Oct 10 11:13:59 worker-0.el8k-ztp-1.hpecloud.org bash[134676]: E1010 11:13:59.043103  134676 manager.go:1123] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9106be97_2586_4033_9ac8_e7507e6db948.slice/crio-f7937d108128e6acb1929c537263b94640a6fae019df3f145e8026471197a25a.scope: Error finding container f7937d108128e6acb1929c537263b94640a6fae019df3f145e8026471197a25a: Status 404 returned error &{%!s(*http.body=&{0xc000fe54b8 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x859fa0) %!s(func() error=0x85a0a0)}
      Oct 10 11:13:59 worker-0.el8k-ztp-1.hpecloud.org bash[134676]: E1010 11:13:59.043800  134676 manager.go:1123] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode632c179_ef5c_4021_a66a_816dfd58bda5.slice/crio-20f53bd011e84ea2beb9972c8ee018a7e3c9595459477386a437ed4a395fef1d.scope: Error finding container 20f53bd011e84ea2beb9972c8ee018a7e3c9595459477386a437ed4a395fef1d: Status 404 returned error &{%!s(*http.body=&{0xc000fe5500 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x859fa0) %!s(func() error=0x85a0a0)}
      Oct 10 11:13:59 worker-0.el8k-ztp-1.hpecloud.org bash[134676]: E1010 11:13:59.044415  134676 manager.go:1123] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf8c980f6_d920_4d2b_b218_7bc02d2004dc.slice/crio-a2488d7e78261e28b44df4909ea8708c560c205965a63b51ebf814d72be4239b.scope: Error finding container a2488d7e78261e28b44df4909ea8708c560c205965a63b51ebf814d72be4239b: Status 404 returned error &{%!s(*http.body=&{0xc000fe5548 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x859fa0) %!s(func() error=0x85a0a0)}
      Oct 10 11:13:59 worker-0.el8k-ztp-1.hpecloud.org bash[134676]: E1010 11:13:59.044759  134676 manager.go:1123] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08a432ba0abc09387dbb8cd7c2122738.slice/crio-aca2d072e72518befd030037b0ee90b632c80c4cc4de06e31a0a79bfc8b8e9d0.scope: Error finding container aca2d072e72518befd030037b0ee90b632c80c4cc4de06e31a0a79bfc8b8e9d0: Status 404 returned error &{%!s(*http.body=&{0xc001b90ff0 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x859fa0) %!s(func() error=0x85a0a0)}
      Oct 10 11:13:59 worker-0.el8k-ztp-1.hpecloud.org bash[134676]: E1010 11:13:59.045132  134676 manager.go:1123] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod476c52e7_cc08_48d5_84f9_09e02dc9cc84.slice/crio-d3f6e044a8db57742318f7b08cadba542ceb5066a7ef9225382c4e33380d4b87.scope: Error finding container d3f6e044a8db57742318f7b08cadba542ceb5066a7ef9225382c4e33380d4b87: Status 404 returned error &{%!s(*http.body=&{0xc000fe55c0 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x859fa0) %!s(func() error=0x85a0a0)}
      Oct 10 11:13:59 worker-0.el8k-ztp-1.hpecloud.org bash[134676]: E1010 11:13:59.045743  134676 manager.go:1123] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24727384_7c62_477e_9129_2ebceccbe627.slice/crio-d4643eba7146e89d7c447125540ba35ec2246af5da27201693acf1d6ab563f75.scope: Error finding container d4643eba7146e89d7c447125540ba35ec2246af5da27201693acf1d6ab563f75: Status 404 returned error &{%!s(*http.body=&{0xc000fe5608 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x859fa0) %!s(func() error=0x85a0a0)}
      Oct 10 11:13:59 worker-0.el8k-ztp-1.hpecloud.org bash[134676]: E1010 11:13:59.046279  134676 manager.go:1123] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf395187c_3da4_4604_8b16_7029aac70988.slice/crio-fb26be3e86316841682cbc1d090653b03b327cb260a24840454957fb3a9e85ef.scope: Error finding container fb26be3e86316841682cbc1d090653b03b327cb260a24840454957fb3a9e85ef: Status 404 returned error &{%!s(*http.body=&{0xc0028f0db0 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x859fa0) %!s(func() error=0x85a0a0)}
      Oct 10 11:13:59 worker-0.el8k-ztp-1.hpecloud.org bash[134676]: E1010 11:13:59.051894  134676 manager.go:1123] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a6524e1_7561_4cc7_89e5_76f259f32acb.slice/crio-080b887d99ff3a674337a1788f8b028d220031bcf38c95f3f4414daae38f380c.scope: Error finding container 080b887d99ff3a674337a1788f8b028d220031bcf38c95f3f4414daae38f380c: Status 404 returned error &{%!s(*http.body=&{0xc00277a018 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x859fa0) %!s(func() error=0x85a0a0)}
      Oct 10 11:13:59 worker-0.el8k-ztp-1.hpecloud.org bash[134676]: E1010 11:13:59.052642  134676 manager.go:1123] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ea32469_5962_4b46_a31c_ee8027da691e.slice/crio-73f1fe0dcf4e76bfe2d09a6ce0eb634412412ee85078699daae785bf2feb4ec3.scope: Error finding container 73f1fe0dcf4e76bfe2d09a6ce0eb634412412ee85078699daae785bf2feb4ec3: Status 404 returned error &{%!s(*http.body=&{0xc000fe4018 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x859fa0) %!s(func() error=0x85a0a0)}
      Oct 10 11:13:59 worker-0.el8k-ztp-1.hpecloud.org bash[134676]: E1010 11:13:59.053435  134676 manager.go:1123] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod476c52e7_cc08_48d5_84f9_09e02dc9cc84.slice/crio-1e87f96cd3a739dfe24ba9794128fc445b679d8ac22ae3968f1d44a736e262e4.scope: Error finding container 1e87f96cd3a739dfe24ba9794128fc445b679d8ac22ae3968f1d44a736e262e4: Status 404 returned error &{%!s(*http.body=&{0xc0024ce078 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x859fa0) %!s(func() error=0x85a0a0)}
      Oct 10 11:13:59 worker-0.el8k-ztp-1.hpecloud.org bash[134676]: E1010 11:13:59.054086  134676 manager.go:1123] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a6524e1_7561_4cc7_89e5_76f259f32acb.slice/crio-71eba57be398d9e05bf116f8a4cc6da629aaa14ccc907c79ed599eda4a7868c8.scope: Error finding container 71eba57be398d9e05bf116f8a4cc6da629aaa14ccc907c79ed599eda4a7868c8: Status 404 returned error &{%!s(*http.body=&{0xc001ae4060 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x859fa0) %!s(func() error=0x85a0a0)}
      Oct 10 11:13:59 worker-0.el8k-ztp-1.hpecloud.org bash[134676]: E1010 11:13:59.054602  134676 manager.go:1123] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9106be97_2586_4033_9ac8_e7507e6db948.slice/crio-f344b4d695010ec8fdba56b3ff35fb23cf8bf73ee6454442a882616aea2b869d.scope: Error finding container f344b4d695010ec8fdba56b3ff35fb23cf8bf73e 
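
      The full set of affected container IDs can be extracted from the journal for triage (a hypothetical one-liner, not part of the original report):

      [root@worker-0 ~]# journalctl -u kubelet | grep -o 'Error finding container [0-9a-f]*' | sort -u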

      If I am not wrong (I have verified some of them), these errors come from the pods that were never drained nor deleted by crictl rmp -fa:

      openshift-cluster-node-tuning-operator             tuned-jdlqq                                                       1/1     Running     1             21h     10.19.10.110   worker-0.el8k-ztp-1.hpecloud.org   <none>           <none>
      openshift-dns                                      dns-default-47s42                                                 2/2     Running     2             21h     10.131.0.6     worker-0.el8k-ztp-1.hpecloud.org   <none>           <none>
      openshift-dns                                      node-resolver-vm5v2                                               1/1     Running     1             21h     10.19.10.110   worker-0.el8k-ztp-1.hpecloud.org   <none>           <none>
      openshift-image-registry                           node-ca-9fs4h                                                     1/1     Running     1             21h     10.19.10.110   worker-0.el8k-ztp-1.hpecloud.org   <none>           <none>
      openshift-ingress-canary                           ingress-canary-ck2jg                                              1/1     Running     1             21h     10.131.0.7     worker-0.el8k-ztp-1.hpecloud.org   <none>           <none>
      openshift-kni-infra                                coredns-worker-0.el8k-ztp-1.hpecloud.org                          2/2     Running     2             21h     10.19.10.110   worker-0.el8k-ztp-1.hpecloud.org   <none>           <none>
      openshift-kni-infra                                keepalived-worker-0.el8k-ztp-1.hpecloud.org                       2/2     Running     2             21h     10.19.10.110   worker-0.el8k-ztp-1.hpecloud.org   <none>           <none>
      openshift-machine-config-operator                  machine-config-daemon-9zdsw                                       2/2     Running     2             21h     10.19.10.110   worker-0.el8k-ztp-1.hpecloud.org   <none>           <none>
      openshift-monitoring                               node-exporter-k7hrs                                               2/2     Running     2             21h     10.19.10.110   worker-0.el8k-ztp-1.hpecloud.org   <none>           <none>
      openshift-multus                                   multus-8pf46                                                      1/1     Running     1             21h     10.19.10.110   worker-0.el8k-ztp-1.hpecloud.org   <none>           <none>
      openshift-multus                                   multus-additional-cni-plugins-pq7kb                               1/1     Running     1             21h     10.19.10.110   worker-0.el8k-ztp-1.hpecloud.org   <none>           <none>
      openshift-multus                                   network-metrics-daemon-gk2sh                                      2/2     Running     2             21h     10.131.0.3     worker-0.el8k-ztp-1.hpecloud.org   <none>           <none>
      openshift-network-diagnostics                      network-check-target-q4lhb                                        1/1     Running     1             21h     10.131.0.4     worker-0.el8k-ztp-1.hpecloud.org   <none>           <none>
      openshift-ovn-kubernetes                           ovnkube-node-kq42w                                                5/5     Running     5             21h     10.19.10.110   worker-0.el8k-ztp-1.hpecloud.org   <none>           <none>
      openshift-ptp                                      linuxptp-daemon-87bqh                                             2/2     Running     2             20h     10.19.10.110   worker-0.el8k-ztp-1.hpecloud.org   <none>           <none>
      openshift-sriov-network-operator                   sriov-network-config-daemon-s7m5s                                 3/3     Running     3             20h     10.19.10.110   worker-0.el8k-ztp-1.hpecloud.org   <none>           <none>
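
      The list above appears to be oc get pods -A -o wide output filtered to this node; the same view can be produced directly with a field selector (a hypothetical equivalent command, not from the original report):

      $ oc get pods -A -o wide --field-selector spec.nodeName=worker-0.el8k-ztp-1.hpecloud.org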

      Actual results:

      After crio and the kubelet are restarted, the kubelet repeatedly logs "Failed to create existing container" errors (manager.go:1123) for container IDs that were removed during the cleanup.

      Expected results:

      The kubelet starts cleanly after the wipe, with no errors referencing containers or sandboxes that no longer exist.

      Additional info:

       

        Assignee: Peter Hunt (pehunt@redhat.com)
        Reporter: Jose Gato Luis (jgato@redhat.com)
        QA Contact: Sunil Choudhary