- Bug
- Resolution: Not a Bug
- Major
- None
- 4.16
- No
- Proposed
- False
Description of problem:
Create an EgressQoS on OCP with three rules, one of which has no podSelector configured. Checking the EgressQoS address sets in the ovnkube-node pod, only two entries can be found. After editing the EgressQoS to add a podSelector to the third rule, the third address-set entry shows up in the ovnkube-node pod.
Version-Release number of selected component (if applicable):
4.16
How reproducible:
always
Steps to Reproduce:
1. Create an EgressQoS in namespace abc:
% oc edit egressqos default -o yaml -n abc
apiVersion: k8s.ovn.org/v1
kind: EgressQoS
metadata:
  creationTimestamp: "2024-04-30T03:00:03Z"
  generation: 3
  name: default
  namespace: abc
  resourceVersion: "1461503"
  uid: 24a6bcab-0d91-4fa9-8235-dd65e847cd19
spec:
  egress:
  - dscp: 30
    dstCIDR: 18.118.137.0/24
    podSelector:
      matchLabels:
        priority: Critical
  - dscp: 46
    dstCIDR: 18.118.137.160/32
    podSelector:
      matchLabels:
        name: test-pods
  - dscp: 10
    dstCIDR: 0.0.0.0/0
status: {}
2. Create two test pods that match the second and third rules; traffic from both pods to the external server gets the DSCP marking correctly.
% cat testpod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: testpod
  labels:
    name: test-pods
spec:
  containers:
  - name: samplecontainer
    imagePullPolicy: IfNotPresent
    image: quay.io/openshifttest/hello-sdn@sha256:d5785550cf77b7932b090fcd1a2625472912fb3189d5973f177a5a2c347a1f95

% cat testpod1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: testpod1
spec:
  containers:
  - name: samplecontainer
    imagePullPolicy: IfNotPresent
    image: quay.io/openshifttest/hello-sdn@sha256:d5785550cf77b7932b090fcd1a2625472912fb3189d5973f177a5a2c347a1f95
% oc get pods -o wide -n abc
NAME       READY   STATUS    RESTARTS   AGE     IP            NODE                                        NOMINATED NODE   READINESS GATES
testpod    1/1     Running   0          6h54m   10.131.0.28   ip-10-0-68-7.us-east-2.compute.internal     <none>           <none>
testpod1   1/1     Running   0          95m     10.128.2.22   ip-10-0-59-175.us-east-2.compute.internal   <none>           <none>
3. Check the address sets in the ovnkube-node pods: only two EgressQoS entries exist, and testpod1's IP address does not appear in either.
% oc rsh ovnkube-node-8flc6
Defaulted container "ovn-controller" out of: ovn-controller, ovn-acl-logging, kube-rbac-proxy-node, kube-rbac-proxy-ovn-metrics, northd, nbdb, sbdb, ovnkube-controller, kubecfg-setup (init)
sh-5.1# ovn-nbctl find address_set external-ids:k8s.ovn.org/owner-type=EgressQoS
_uuid               : 13bac862-7eab-44c1-9779-e21d740684fc
addresses           : []
external_ids        : {ip-family=v4, "k8s.ovn.org/id"="default-network-controller:EgressQoS:abc:1000:v4", "k8s.ovn.org/name"=abc, "k8s.ovn.org/owner-controller"=default-network-controller, "k8s.ovn.org/owner-type"=EgressQoS, priority="1000"}
name                : a17159847422678949325

_uuid               : 3159edab-d936-41b0-9290-288ca9b41b3a
addresses           : ["10.131.0.28"]
external_ids        : {ip-family=v4, "k8s.ovn.org/id"="default-network-controller:EgressQoS:abc:999:v4", "k8s.ovn.org/name"=abc, "k8s.ovn.org/owner-controller"=default-network-controller, "k8s.ovn.org/owner-type"=EgressQoS, priority="999"}
name                : a13448932023089817269

% oc rsh ovnkube-node-rjppc
Defaulted container "ovn-controller" out of: ovn-controller, ovn-acl-logging, kube-rbac-proxy-node, kube-rbac-proxy-ovn-metrics, northd, nbdb, sbdb, ovnkube-controller, kubecfg-setup (init)
sh-5.1# ovn-nbctl find address_set external-ids:k8s.ovn.org/owner-type=EgressQoS
_uuid               : 0a1ea843-95a4-4a0c-a5a1-6f3585c41b6f
addresses           : []
external_ids        : {ip-family=v4, "k8s.ovn.org/id"="default-network-controller:EgressQoS:abc:1000:v4", "k8s.ovn.org/name"=abc, "k8s.ovn.org/owner-controller"=default-network-controller, "k8s.ovn.org/owner-type"=EgressQoS, priority="1000"}
name                : a17159847422678949325

_uuid               : 8becc51e-595c-4ae2-b481-984b48f45ba6
addresses           : []
external_ids        : {ip-family=v4, "k8s.ovn.org/id"="default-network-controller:EgressQoS:abc:999:v4", "k8s.ovn.org/name"=abc, "k8s.ovn.org/owner-controller"=default-network-controller, "k8s.ovn.org/owner-type"=EgressQoS, priority="999"}
name                : a13448932023089817269
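For quicker inspection, the verbose `ovn-nbctl find` output can be reduced to priority/addresses pairs. A minimal sketch (the `summarize` helper is hypothetical, not part of any tooling; the sample input is copied from the output above):

```shell
#!/bin/sh
# Reduce `ovn-nbctl find address_set` records to "priority addresses" pairs.
summarize() {
  awk '
    /^addresses/ { sub(/^addresses[ ]*: /, ""); addrs = $0 }
    /priority="/ {
      match($0, /priority="[0-9]+"/)
      print substr($0, RSTART + 10, RLENGTH - 11), addrs
    }
  '
}

# Demo with a record copied from the report:
summarize <<'EOF'
_uuid               : 3159edab-d936-41b0-9290-288ca9b41b3a
addresses           : ["10.131.0.28"]
external_ids        : {ip-family=v4, "k8s.ovn.org/id"="default-network-controller:EgressQoS:abc:999:v4", "k8s.ovn.org/owner-type"=EgressQoS, priority="999"}
name                : a13448932023089817269
EOF
# prints: 999 ["10.131.0.28"]
```

On a live cluster this would be piped from the same command used above, e.g. `ovn-nbctl find address_set external-ids:k8s.ovn.org/owner-type=EgressQoS | summarize`.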
4. Edit the EgressQoS to add a podSelector to the third rule, then check the EgressQoS address sets again; the entry with priority 998 shows up.
% oc edit egressqos default -o yaml -n abc
apiVersion: k8s.ovn.org/v1
kind: EgressQoS
metadata:
  creationTimestamp: "2024-04-30T03:00:03Z"
  generation: 4
  name: default
  namespace: abc
  resourceVersion: "1616453"
  uid: 24a6bcab-0d91-4fa9-8235-dd65e847cd19
spec:
  egress:
  - dscp: 30
    dstCIDR: 18.118.137.0/24
    podSelector:
      matchLabels:
        priority: Critical
  - dscp: 46
    dstCIDR: 18.118.137.160/32
    podSelector:
      matchLabels:
        name: test-pods
  - dscp: 10
    dstCIDR: 0.0.0.0/0
    podSelector:
      matchLabels:
        name: test-pods
status: {}
sh-5.1# ovn-nbctl find address_set external-ids:k8s.ovn.org/owner-type=EgressQoS
_uuid               : 81084010-84f5-48ec-b363-721d5175ad45
addresses           : []
external_ids        : {ip-family=v4, "k8s.ovn.org/id"="default-network-controller:EgressQoS:abc:998:v4", "k8s.ovn.org/name"=abc, "k8s.ovn.org/owner-controller"=default-network-controller, "k8s.ovn.org/owner-type"=EgressQoS, priority="998"}
name                : a3021313240758439040

_uuid               : af3fed56-dcc8-46b7-ad87-1cb02c122e7f
addresses           : []
external_ids        : {ip-family=v4, "k8s.ovn.org/id"="default-network-controller:EgressQoS:abc:999:v4", "k8s.ovn.org/name"=abc, "k8s.ovn.org/owner-controller"=default-network-controller, "k8s.ovn.org/owner-type"=EgressQoS, priority="999"}
name                : a13448932023089817269

_uuid               : f8c79a3f-cb30-42d5-a706-f6af3a4d078d
addresses           : []
external_ids        : {ip-family=v4, "k8s.ovn.org/id"="default-network-controller:EgressQoS:abc:1000:v4", "k8s.ovn.org/name"=abc, "k8s.ovn.org/owner-controller"=default-network-controller, "k8s.ovn.org/owner-type"=EgressQoS, priority="1000"}
name                : a17159847422678949325
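The external_ids in the outputs above suggest that each rule is assigned a priority counting down from 1000 in spec.egress order; this is an inference from the observed values (1000, 999, 998), not documented behavior confirmed here. Sketched as arithmetic:

```shell
#!/bin/sh
# Assumption inferred from the outputs above: EgressQoS rules get OVN
# priority 1000 - <rule index in spec.egress order>.
start_priority=1000
for i in 0 1 2; do
  echo "rule $i -> priority $((start_priority - i))"
done
# prints:
# rule 0 -> priority 1000
# rule 1 -> priority 999
# rule 2 -> priority 998
```

This matches the three address sets seen after step 4, where the previously missing third rule appears with priority 998.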
Actual results:
The EgressQoS rule without a podSelector does not get an address set created for it.
Expected results:
Every EgressQoS rule should have an address-set entry.
Additional info:
% oc version
Client Version: 4.16.0-0.nightly-2024-04-26-145258
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: 4.16.0-0.nightly-2024-04-29-154406
Kubernetes Version: v1.29.4+d1ec84a