-
Bug
-
Resolution: Duplicate
-
Undefined
-
None
-
4.14
-
None
-
Important
-
None
-
3
-
False
-
-
Description of problem:
A pod is failing to start with the following FailedCreatePodSandBox event:
ns-hen06n3iwftc030-cmg 3m34s Warning FailedCreatePodSandBox pod/lmg-statefulset-0 (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_lmg-statefulset-0_ns-hen06n3iwftc030-cmg_b6e586fc-d5f6-4959-b02a-993f931d062a_0(2daa14e72176a72240f48dd1bb69f4817bc1c5b747fa5765bb4114ae3d36ed1a): error adding pod ns-hen06n3iwftc030-cmg_lmg-statefulset-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: '&{ContainerID:2daa14e72176a72240f48dd1bb69f4817bc1c5b747fa5765bb4114ae3d36ed1a Netns:/var/run/netns/d079d027-ab22-43c1-b3c5-9e5717d7c8ed IfName:eth0 Args:IgnoreUnknown=1;K8S_POD_NAMESPACE=ns-hen06n3iwftc030-cmg;K8S_POD_NAME=lmg-statefulset-0;K8S_POD_INFRA_CONTAINER_ID=2daa14e72176a72240f48dd1bb69f4817bc1c5b747fa5765bb4114ae3d36ed1a;K8S_POD_UID=b6e586fc-d5f6-4959-b02a-993f931d062a Path: StdinData:[123 34 98 105 110 68 105 114 34 58 34 47 118 97 114 47 108 105 98 47 99 110 105 47 98 105 110 34 44 34 99 104 114 111 111 116 68 105 114 34 58 34 47 104 111 115 116 114 111 111 116 34 44 34 99 108 117 115 116 101 114 78 101 116 119 111 114 107 34 58 34 47 104 111 115 116 47 114 117 110 47 109 117 108 116 117 115 47 99 110 105 47 110 101 116 46 100 47 49 48 45 111 118 110 45 107 117 98 101 114 110 101 116 101 115 46 99 111 110 102 34 44 34 99 110 105 67 111 110 102 105 103 68 105 114 34 58 34 47 104 111 115 116 47 101 116 99 47 99 110 105 47 110 101 116 46 100 34 44 34 99 110 105 86 101 114 115 105 111 110 34 58 34 48 46 51 46 49 34 44 34 100 97 101 109 111 110 83 111 99 107 101 116 68 105 114 34 58 34 47 114 117 110 47 109 117 108 116 117 115 47 115 111 99 107 101 116 34 44 34 103 108 111 98 97 108 78 97 109 101 115 112 97 99 101 115 34 58 34 100 101 102 97 117 108 116 44 111 112 101 110 115 104 105 102 116 45 109 117 108 116 117 115 44 111 112 101 110 115 104 105 102 116 45 115 114 105 111 118 45 110 101 
116 119 111 114 107 45 111 112 101 114 97 116 111 114 34 44 34 108 111 103 76 101 118 101 108 34 58 34 118 101 114 98 111 115 101 34 44 34 108 111 103 84 111 83 116 100 101 114 114 34 58 116 114 117 101 44 34 109 117 108 116 117 115 65 117 116 111 99 111 110 102 105 103 68 105 114 34 58 34 47 104 111 115 116 47 114 117 110 47 109 117 108 116 117 115 47 99 110 105 47 110 101 116 46 100 34 44 34 109 117 108 116 117 115 67 111 110 102 105 103 70 105 108 101 34 58 34 97 117 116 111 34 44 34 110 97 109 101 34 58 34 109 117 108 116 117 115 45 99 110 105 45 110 101 116 119 111 114 107 34 44 34 110 97 109 101 115 112 97 99 101 73 115 111 108 97 116 105 111 110 34 58 116 114 117 101 44 34 112 101 114 78 111 100 101 67 101 114 116 105 102 105 99 97 116 101 34 58 123 34 98 111 111 116 115 116 114 97 112 75 117 98 101 99 111 110 102 105 103 34 58 34 47 118 97 114 47 108 105 98 47 107 117 98 101 108 101 116 47 107 117 98 101 99 111 110 102 105 103 34 44 34 99 101 114 116 68 105 114 34 58 34 47 101 116 99 47 99 110 105 47 109 117 108 116 117 115 47 99 101 114 116 115 34 44 34 99 101 114 116 68 117 114 97 116 105 111 110 34 58 34 50 52 104 34 44 34 101 110 97 98 108 101 100 34 58 116 114 117 101 125 44 34 115 111 99 107 101 116 68 105 114 34 58 34 47 104 111 115 116 47 114 117 110 47 109 117 108 116 117 115 47 115 111 99 107 101 116 34 44 34 116 121 112 101 34 58 34 109 117 108 116 117 115 45 115 104 105 109 34 125]} ContainerID:"2daa14e72176a72240f48dd1bb69f4817bc1c5b747fa5765bb4114ae3d36ed1a" Netns:"/var/run/netns/d079d027-ab22-43c1-b3c5-9e5717d7c8ed" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=ns-hen06n3iwftc030-cmg;K8S_POD_NAME=lmg-statefulset-0;K8S_POD_INFRA_CONTAINER_ID=2daa14e72176a72240f48dd1bb69f4817bc1c5b747fa5765bb4114ae3d36ed1a;K8S_POD_UID=b6e586fc-d5f6-4959-b02a-993f931d062a" Path:"" ERRORED: error configuring pod [ns-hen06n3iwftc030-cmg/lmg-statefulset-0] networking: [ns-hen06n3iwftc030-cmg/lmg-statefulset-0/b6e586fc-d5f6-4959-b02a-993f931d062a:dsf-net1]: 
error adding container to network "dsf-net1": SRIOV-CNI failed to configure VF "failed to set vf 3 vlan configuration - id 0, qos 0 and proto 802.1q: invalid argument"...
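The StdinData field in the event above is logged as a Go byte slice (space-separated decimal ASCII values), so the Multus configuration it carries is unreadable as printed. A short throwaway script like the following (a generic sketch, not part of the case data; the `raw` value here is a shortened sample, not the full field) recovers the embedded JSON:

```python
import json

# Paste the space-separated decimal values from the StdinData field here.
# Shortened sample for illustration; the real field is much longer.
raw = "123 34 98 105 110 68 105 114 34 58 34 47 118 97 114 34 125"

# Convert each decimal value to a byte, then decode the result as UTF-8 text.
decoded = bytes(int(b) for b in raw.split()).decode("utf-8")
print(decoded)  # prints the embedded Multus CNI config as JSON text

# The decoded text is valid JSON and can be parsed normally.
config = json.loads(decoded)
```

Running this over the full StdinData array yields the multus-shim configuration (binDir, clusterNetwork, cniVersion, and so on) in readable form.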
All SriovNetworkNodeStates report syncStatus: Succeeded, so the policies were applied successfully:
[dasmall@supportshell-1 sriovnetworknodestates]$ grep -ir syncStatus
nlhen06st1ow039.test.ci.internal.vodafone.nl.yaml: f:syncStatus: {}
nlhen06st1ow039.test.ci.internal.vodafone.nl.yaml: syncStatus: Succeeded
nlhen06st3ow019.test.ci.internal.vodafone.nl.yaml: f:syncStatus: {}
nlhen06st3ow019.test.ci.internal.vodafone.nl.yaml: syncStatus: Succeeded
nlhen06st3ow020.test.ci.internal.vodafone.nl.yaml: f:syncStatus: {}
nlhen06st3ow020.test.ci.internal.vodafone.nl.yaml: syncStatus: Succeeded
nlhen06st3ow021.test.ci.internal.vodafone.nl.yaml: f:syncStatus: {}
nlhen06st3ow021.test.ci.internal.vodafone.nl.yaml: syncStatus: Succeeded
nlhen06st3ow022.test.ci.internal.vodafone.nl.yaml: f:syncStatus: {}
nlhen06st3ow022.test.ci.internal.vodafone.nl.yaml: syncStatus: Succeeded
nlhen06st3ow023.test.ci.internal.vodafone.nl.yaml: f:syncStatus: {}
nlhen06st3ow023.test.ci.internal.vodafone.nl.yaml: syncStatus: Succeeded
nlhen06st3ow025.test.ci.internal.vodafone.nl.yaml: f:syncStatus: {}
nlhen06st3ow025.test.ci.internal.vodafone.nl.yaml: syncStatus: Succeeded
nlhen06st3ow026.test.ci.internal.vodafone.nl.yaml: f:syncStatus: {}
nlhen06st3ow026.test.ci.internal.vodafone.nl.yaml: syncStatus: Succeeded
nlhen06st3ow027.test.ci.internal.vodafone.nl.yaml: f:syncStatus: {}
nlhen06st3ow027.test.ci.internal.vodafone.nl.yaml: syncStatus: Succeeded
nlhen06st3ow048.test.ci.internal.vodafone.nl.yaml: f:syncStatus: {}
nlhen06st3ow048.test.ci.internal.vodafone.nl.yaml: syncStatus: Succeeded
nlhen06st3ow049.test.ci.internal.vodafone.nl.yaml: f:syncStatus: {}
nlhen06st3ow049.test.ci.internal.vodafone.nl.yaml: syncStatus: Succeeded
nlhen06st3ow050.test.ci.internal.vodafone.nl.yaml: f:syncStatus: {}
nlhen06st3ow050.test.ci.internal.vodafone.nl.yaml: syncStatus: Succeeded
nlhen06st3ow051.test.ci.internal.vodafone.nl.yaml: f:syncStatus: {}
nlhen06st3ow051.test.ci.internal.vodafone.nl.yaml: syncStatus: Succeeded
nlhen06st3ow052.test.ci.internal.vodafone.nl.yaml: f:syncStatus: {}
nlhen06st3ow052.test.ci.internal.vodafone.nl.yaml: syncStatus: Succeeded
nlhen06st3ow053.test.ci.internal.vodafone.nl.yaml: f:syncStatus: {}
nlhen06st3ow053.test.ci.internal.vodafone.nl.yaml: syncStatus: Succeeded
nlhen06st3ow054.test.ci.internal.vodafone.nl.yaml: f:syncStatus: {}
nlhen06st3ow054.test.ci.internal.vodafone.nl.yaml: syncStatus: Succeeded
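For reference, the same check can be run against the live cluster instead of grepping collected YAML. A generic sketch, assuming the operator is installed in its default namespace:

```shell
# List the sync status of every SriovNetworkNodeState in one command
# (assumes the default openshift-sriov-network-operator namespace).
oc get sriovnetworknodestates.sriovnetwork.openshift.io \
  -n openshift-sriov-network-operator \
  -o custom-columns=NODE:.metadata.name,SYNC:.status.syncStatus
```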
The pods request these SR-IOV resources:
[dasmall@supportshell-1 03959540]$ oc get pods llb-statefulset-1 -o yaml | yq .spec.containers[].resources.requests | grep openshift.io
openshift.io/n1p0: '2'
openshift.io/n1p1: '2'
[dasmall@supportshell-1 03959540]$ oc get pods llb-statefulset-0 -o yaml | yq .spec.containers[].resources.requests | grep openshift.io
openshift.io/n1p0: '2'
openshift.io/n1p1: '2'
[dasmall@supportshell-1 03959540]$ oc get pods lmg-statefulset-0 -o yaml | yq .spec.containers[].resources.requests | grep openshift.io
openshift.io/n1p0: '1'
openshift.io/n1p1: '1'
Nodes appear to have those resources available/allocatable:
[dasmall@supportshell-1 03959540]$ oc get node nlhen06st3ow052.test.ci.internal.vodafone.nl -o yaml | yq .status.allocatable | grep openshift.io
openshift.io/n0p0: '64'
openshift.io/n0p1: '64'
openshift.io/n1p0: '64'
openshift.io/n1p1: '64'
[dasmall@supportshell-1 03959540]$ oc get node nlhen06st3ow027.test.ci.internal.vodafone.nl -o yaml | yq .status.allocatable | grep openshift.io
openshift.io/n0p0: '64'
openshift.io/n0p1: '64'
openshift.io/n1p0: '64'
openshift.io/n1p1: '64'
[dasmall@supportshell-1 03959540]$ oc get node nlhen06st3ow026.test.ci.internal.vodafone.nl -o yaml | yq .status.allocatable | grep openshift.io
openshift.io/n0p0: '64'
openshift.io/n0p1: '64'
openshift.io/n1p0: '64'
openshift.io/n1p1: '64'
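A quick sanity check on the figures above confirms that capacity is not the problem: even in the worst case of all three pods landing on one node, they request at most 5 VFs from each pool against 64 allocatable, so the "invalid argument" points at VF configuration rather than VF exhaustion. A throwaway calculation using the case data:

```python
# Requested openshift.io/n1p0 and n1p1 VFs per pod, from the case data above.
requests = {
    "llb-statefulset-1": {"n1p0": 2, "n1p1": 2},
    "llb-statefulset-0": {"n1p0": 2, "n1p1": 2},
    "lmg-statefulset-0": {"n1p0": 1, "n1p1": 1},
}
allocatable = 64  # per resource pool on each node, per the oc get node output

# Worst case: every pod scheduled onto the same node.
for pool in ("n1p0", "n1p1"):
    total = sum(pod[pool] for pod in requests.values())
    print(f"{pool}: {total} requested vs {allocatable} allocatable")
```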
and the nodes report the same 64 VFs in /sys/class/net/{nic}/device/sriov_numvfs:
$ oc debug node/nlhen06st3ow052.test.ci.internal.vodafone.nl
Starting pod/nlhen06st3ow052testciinternalvodafonenl-debug ...
To use host binaries, run `chroot /host`
Pod IP: 10.58.152.83
If you don't see a command prompt, try pressing enter.
sh-4.4# chroot /host
sh-5.1# cat /sys/class/net/ens3f0np0/device/sriov_numvfs
64
sh-5.1# cat /sys/class/net/ens3f1np1/device/sriov_numvfs
64
[wscholte@rsd.test.internal.vodafone.nl@nlhen06sd1ts002 ~]$ oc debug node/nlhen06st3ow027.test.ci.internal.vodafone.nl
Starting pod/nlhen06st3ow027testciinternalvodafonenl-debug ...
To use host binaries, run `chroot /host`
Pod IP: 10.58.152.78
If you don't see a command prompt, try pressing enter.
sh-4.4# chroot /host
sh-5.1# cat /sys/class/net/ens3f0np0/device/sriov_numvfs
64
sh-5.1# cat /sys/class/net/ens3f1np1/device/sriov_numvfs
64
[wscholte@rsd.test.internal.vodafone.nl@nlhen06sd1ts002 ~]$ oc debug node/nlhen06st3ow026.test.ci.internal.vodafone.nl
Starting pod/nlhen06st3ow026testciinternalvodafonenl-debug ...
To use host binaries, run `chroot /host`
Pod IP: 10.58.152.77
If you don't see a command prompt, try pressing enter.
sh-4.4# chroot /host
sh-5.1# cat /sys/class/net/ens3f0np0/device/sriov_numvfs
64
sh-5.1# cat /sys/class/net/ens3f1np1/device/sriov_numvfs
64
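Since the failure is in setting VLAN id 0 / qos 0 / proto 802.1q on VF 3, it may also be worth checking, from the same debug shell, what the PF driver reports for that VF and whether the equivalent manual command reproduces the "invalid argument". A hedged sketch (interface names taken from the transcripts above; this requires the SR-IOV hardware, so it can only be run on the node):

```shell
# From a debug shell on the node (after `chroot /host`):

# Show per-VF state (MAC, vlan, spoofchk, ...) reported by the PF driver.
ip link show ens3f0np0

# Attempt the same VLAN configuration the SR-IOV CNI reports failing for VF 3.
# If the NIC driver rejects the explicit 802.1q protocol argument, this
# should fail with "Invalid argument", matching the pod event above.
ip link set ens3f0np0 vf 3 vlan 0 qos 0 proto 802.1q
```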
Additional info:
Support case 03959540. The case includes a must-gather, an SR-IOV must-gather, sosreports from each of the three nodes above, and a namespace inspect from the application namespace where the error is seen.