INFO[2022-11-15T05:15:00Z] ci-operator version v20221114-497644f12
INFO[2022-11-15T05:15:00Z] Loading configuration from https://config.ci.openshift.org for openshift/microshift@main
INFO[2022-11-15T05:15:01Z] Resolved source https://github.com/openshift/microshift to main@1f9a5f2b, merging: #1115 2fc1bfa8 @microshift-rebase-script[bot]
INFO[2022-11-15T05:15:01Z] Using namespace https://console-openshift-console.apps.build04.34d2.p2.openshiftapps.com/k8s/cluster/projects/ci-op-k5cwk1pv
INFO[2022-11-15T05:15:01Z] Running [input:root], [input:cli], [input:libvirt-installer], [input:test-bin], [release-inputs], src, dependency-payload, microshift-validate, [output:stable:microshift-validate], [output:stable:dependency-payload], [images], [release:latest], e2e-openshift-conformance-sig-scheduling
INFO[2022-11-15T05:15:02Z] Tagging ocp/4.12:libvirt-installer into pipeline:libvirt-installer.
INFO[2022-11-15T05:15:02Z] Tagging ocp/4.12:cli into pipeline:cli.
INFO[2022-11-15T05:15:02Z] Tagging ocp/4.12:tests into pipeline:test-bin.
INFO[2022-11-15T05:15:02Z] Tagging openshift/release:rhel-8-release-golang-1.18-openshift-4.12 into pipeline:root.
INFO[2022-11-15T05:15:02Z] Tagged shared images from ocp/4.12:${component}, images will be pullable from registry.build04.ci.openshift.org/ci-op-k5cwk1pv/stable:${component}
INFO[2022-11-15T05:15:02Z] Building src
INFO[2022-11-15T05:17:16Z] Build src succeeded after 2m14s
INFO[2022-11-15T05:17:16Z] Building dependency-payload
INFO[2022-11-15T05:17:16Z] Building microshift-validate
INFO[2022-11-15T05:19:46Z] Build microshift-validate succeeded after 2m30s
INFO[2022-11-15T05:19:46Z] Tagging microshift-validate into stable
INFO[2022-11-15T05:25:06Z] Build dependency-payload succeeded after 7m50s
INFO[2022-11-15T05:25:06Z] Tagging dependency-payload into stable
INFO[2022-11-15T05:25:06Z] Creating release image registry.build04.ci.openshift.org/ci-op-k5cwk1pv/release:latest.
INFO[2022-11-15T05:26:56Z] Snapshot integration stream into release 4.12.0-0.ci.test-2022-11-15-052506-ci-op-k5cwk1pv-latest to tag release:latest
INFO[2022-11-15T05:26:56Z] Acquiring leases for test e2e-openshift-conformance-sig-scheduling: [gcp-quota-slice]
INFO[2022-11-15T05:26:56Z] Acquired 1 lease(s) for gcp-quota-slice: [us-central1--gcp-quota-slice-66]
INFO[2022-11-15T05:26:56Z] Running multi-stage test e2e-openshift-conformance-sig-scheduling
INFO[2022-11-15T05:26:57Z] Running multi-stage phase pre
INFO[2022-11-15T05:26:57Z] Running step e2e-openshift-conformance-sig-scheduling-ipi-install-rbac.
INFO[2022-11-15T05:27:17Z] Step e2e-openshift-conformance-sig-scheduling-ipi-install-rbac succeeded after 20s.
INFO[2022-11-15T05:27:17Z] Running step e2e-openshift-conformance-sig-scheduling-upi-gcp-rhel8-pre.
INFO[2022-11-15T05:28:57Z] Step e2e-openshift-conformance-sig-scheduling-upi-gcp-rhel8-pre succeeded after 1m40s.
INFO[2022-11-15T05:28:57Z] Running step e2e-openshift-conformance-sig-scheduling-openshift-microshift-e2e-wait-for-ssh.
INFO[2022-11-15T05:29:27Z] Step e2e-openshift-conformance-sig-scheduling-openshift-microshift-e2e-wait-for-ssh succeeded after 30s.
INFO[2022-11-15T05:29:27Z] Running step e2e-openshift-conformance-sig-scheduling-openshift-microshift-e2e-pre-rpm-install.
INFO[2022-11-15T05:36:28Z] Step e2e-openshift-conformance-sig-scheduling-openshift-microshift-e2e-pre-rpm-install succeeded after 7m0s.
INFO[2022-11-15T05:36:28Z] Running step e2e-openshift-conformance-sig-scheduling-upi-gcp-rhel8-add-disk.
INFO[2022-11-15T05:37:08Z] Step e2e-openshift-conformance-sig-scheduling-upi-gcp-rhel8-add-disk succeeded after 40s.
INFO[2022-11-15T05:37:08Z] Running step e2e-openshift-conformance-sig-scheduling-upi-gcp-rhel8-lvm.
INFO[2022-11-15T05:37:58Z] Step e2e-openshift-conformance-sig-scheduling-upi-gcp-rhel8-lvm succeeded after 50s.
INFO[2022-11-15T05:37:58Z] Running step e2e-openshift-conformance-sig-scheduling-openshift-microshift-e2e-wait-for-cluster-up.
INFO[2022-11-15T05:48:48Z] Logs for container test in pod e2e-openshift-conformance-sig-scheduling-openshift-microshift-e2e-wait-for-cluster-up:
INFO[2022-11-15T05:48:48Z] ServerAliveInterval 30
ServerAliveCountMax 1200
Activated service account credentials for: [do-not-delete-ci-provisioner@XXXXXXXXXXXXXXXXXXXXXX.iam.gserviceaccount.com]
Updated property [core/project].
Updated property [compute/zone].
Updated property [compute/region].
Warning: Permanently added 'compute.723320011055078814' (ECDSA) to the list of known hosts.
+ trap 'sudo journalctl -eu microshift' EXIT
+ sudo systemctl enable microshift --now
Created symlink /etc/systemd/system/multi-user.target.wants/microshift.service → /usr/lib/systemd/system/microshift.service.
This is rpm run
++ command -v podman
+ [[ -n '' ]]
+ echo 'This is rpm run'
+ sudo systemctl status microshift
● microshift.service - MicroShift
   Loaded: loaded (/usr/lib/systemd/system/microshift.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2022-11-15 05:39:07 UTC; 35ms ago
 Main PID: 62389 (microshift)
    Tasks: 27 (limit: 203640)
   Memory: 327.0M
      CPU: 13.930s
   CGroup: /system.slice/microshift.service
           └─62389 /usr/bin/microshift run
Nov 15 05:39:03 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kubelet E1115 05:39:03.061198 62389 kubelet.go:2057] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Nov 15 05:39:03 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kubelet E1115 05:39:03.461897 62389 kubelet.go:2057] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Nov 15 05:39:03 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kubelet I1115 05:39:03.587207 62389 apiserver.go:52] "Watching apiserver"
Nov 15 05:39:04 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kubelet E1115 05:39:04.262749 62389 kubelet.go:2057] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Nov 15 05:39:05 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kubelet E1115 05:39:05.863612 62389 kubelet.go:2057] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Nov 15 05:39:07 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kubelet I1115 05:39:07.569330 62389 kubelet.go:146] kubelet is ready
Nov 15 05:39:07 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: ??? I1115 05:39:07.569405 62389 run.go:140] MicroShift is ready
Nov 15 05:39:07 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: ??? I1115 05:39:07.570047 62389 run.go:145] sent sd_notify readiness message
Nov 15 05:39:07 release-ci-ci-op-k5cwk1pv-7cb14 systemd[1]: Started MicroShift.
Nov 15 05:39:07 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kubelet I1115 05:39:07.607105 62389 manager.go:447] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Nov 15 05:39:07 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kubelet I1115 05:39:07.607539 62389 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
+ sudo test -f /var/lib/microshift/resources/kubeadmin/kubeconfig
+ sudo ls -la /var/lib/microshift
total 8
drwx------. 6 root root 71 Nov 15 05:39 .
drwxr-xr-x. 40 root root 4096 Nov 15 05:38 ..
drwxr-xr-x. 14 root root 4096 Nov 15 05:38 certs
drwx------. 3 root root 20 Nov 15 05:38 etcd
drwxr-xr-x. 3 root root 20 Nov 15 05:39 kubelet-plugins
drwxr-xr-x. 8 root root 150 Nov 15 05:38 resources
+ sudo ls -la /var/lib/microshift/resources/kubeadmin/kubeconfig
-rw-------. 1 root root 8870 Nov 15 05:38 /var/lib/microshift/resources/kubeadmin/kubeconfig
+ sudo journalctl -eu microshift
-- Logs begin at Tue 2022-11-15 05:29:08 UTC, end at Tue 2022-11-15 05:39:07 UTC. --
Nov 15 05:38:27 release-ci-ci-op-k5cwk1pv-7cb14 systemd[1]: Starting MicroShift...
Nov 15 05:38:28 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: WARNING: Validity period of the certificate for "admin-kubeconfig-signer" is greater than 5 years!
Nov 15 05:38:28 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: WARNING: By security reasons it is strongly recommended to change this period and make it smaller!
Nov 15 05:38:28 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: WARNING: Validity period of the certificate for "system:admin" is greater than 2 years!
Nov 15 05:38:28 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: WARNING: By security reasons it is strongly recommended to change this period and make it smaller!
Nov 15 05:38:29 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: WARNING: Validity period of the certificate for "service-ca" is greater than 5 years!
Nov 15 05:38:29 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: WARNING: By security reasons it is strongly recommended to change this period and make it smaller!
Nov 15 05:38:30 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: WARNING: Validity period of the certificate for "ingress-ca" is greater than 5 years!
Nov 15 05:38:30 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: WARNING: By security reasons it is strongly recommended to change this period and make it smaller!
Nov 15 05:38:30 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: WARNING: Validity period of the certificate for "kube-apiserver-external-signer" is greater than 5 years!
Nov 15 05:38:30 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: WARNING: By security reasons it is strongly recommended to change this period and make it smaller!
Nov 15 05:38:30 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: WARNING: Validity period of the certificate for "kube-apiserver-localhost-signer" is greater than 5 years!
Nov 15 05:38:30 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: WARNING: By security reasons it is strongly recommended to change this period and make it smaller!
Nov 15 05:38:31 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: WARNING: Validity period of the certificate for "kube-apiserver-service-network-signer" is greater than 5 years!
Nov 15 05:38:31 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: WARNING: By security reasons it is strongly recommended to change this period and make it smaller!
Nov 15 05:38:31 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: WARNING: Validity period of the certificate for "etcd-signer" is greater than 5 years!
Nov 15 05:38:31 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: WARNING: By security reasons it is strongly recommended to change this period and make it smaller!
Nov 15 05:38:32 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: WARNING: Validity period of the certificate for "etcd" is greater than 2 years!
Nov 15 05:38:32 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: WARNING: By security reasons it is strongly recommended to change this period and make it smaller!
Nov 15 05:38:32 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: ??? I1115 05:38:32.579569 62389 run.go:115] Starting MicroShift
Nov 15 05:38:32 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: ??? E1115 05:38:32.579790 62389 certchains.go:122] [admin-kubeconfig-signer] rotate at: 2031-11-18 05:38:28 +0000 UTC
Nov 15 05:38:32 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: ??? E1115 05:38:32.579800 62389 certchains.go:122] [admin-kubeconfig-signer admin-kubeconfig-client] rotate at: 2031-11-18 05:38:28 +0000 UTC
Nov 15 05:38:32 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: ??? E1115 05:38:32.579809 62389 certchains.go:122] [aggregator-signer] rotate at: 2023-07-18 05:38:29 +0000 UTC
Nov 15 05:38:32 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: ??? E1115 05:38:32.579815 62389 certchains.go:122] [aggregator-signer aggregator-client] rotate at: 2023-07-18 05:38:29 +0000 UTC
Nov 15 05:38:32 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: ??? E1115 05:38:32.579821 62389 certchains.go:122] [etcd-signer] rotate at: 2031-11-18 05:38:31 +0000 UTC
Nov 15 05:38:32 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: ??? E1115 05:38:32.579828 62389 certchains.go:122] [etcd-signer apiserver-etcd-client] rotate at: 2031-11-18 05:38:32 +0000 UTC
Nov 15 05:38:32 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: ??? E1115 05:38:32.579834 62389 certchains.go:122] [etcd-signer etcd-peer] rotate at: 2031-11-18 05:38:32 +0000 UTC
Nov 15 05:38:32 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: ??? E1115 05:38:32.579839 62389 certchains.go:122] [etcd-signer etcd-serving] rotate at: 2031-11-18 05:38:32 +0000 UTC
Nov 15 05:38:32 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: ??? E1115 05:38:32.579845 62389 certchains.go:122] [ingress-ca] rotate at: 2031-11-18 05:38:30 +0000 UTC
Nov 15 05:38:32 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: ??? E1115 05:38:32.579851 62389 certchains.go:122] [ingress-ca router-default-serving] rotate at: 2023-07-18 05:38:30 +0000 UTC
Nov 15 05:38:32 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: ??? E1115 05:38:32.579857 62389 certchains.go:122] [kube-apiserver-external-signer] rotate at: 2031-11-18 05:38:30 +0000 UTC
Nov 15 05:38:32 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: ??? E1115 05:38:32.579862 62389 certchains.go:122] [kube-apiserver-external-signer kube-external-serving] rotate at: 2023-07-18 05:38:30 +0000 UTC
Nov 15 05:38:32 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: ??? E1115 05:38:32.579869 62389 certchains.go:122] [kube-apiserver-localhost-signer] rotate at: 2031-11-18 05:38:31 +0000 UTC
Nov 15 05:38:32 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: ??? E1115 05:38:32.579875 62389 certchains.go:122] [kube-apiserver-localhost-signer kube-apiserver-localhost-serving] rotate at: 2023-07-18 05:38:31 +0000 UTC
Nov 15 05:38:32 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: ??? E1115 05:38:32.579880 62389 certchains.go:122] [kube-apiserver-service-network-signer] rotate at: 2031-11-18 05:38:31 +0000 UTC
Nov 15 05:38:32 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: ??? E1115 05:38:32.579886 62389 certchains.go:122] [kube-apiserver-service-network-signer kube-apiserver-service-network-serving] rotate at: 2023-07-18 05:38:31 +0000 UTC
Nov 15 05:38:32 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: ??? E1115 05:38:32.579891 62389 certchains.go:122] [kube-apiserver-to-kubelet-signer] rotate at: 2023-07-18 05:38:28 +0000 UTC
Nov 15 05:38:32 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: ??? E1115 05:38:32.579897 62389 certchains.go:122] [kube-apiserver-to-kubelet-signer kube-apiserver-to-kubelet-client] rotate at: 2023-07-18 05:38:28 +0000 UTC
Nov 15 05:38:32 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: ??? E1115 05:38:32.579902 62389 certchains.go:122] [kube-control-plane-signer] rotate at: 2023-07-18 05:38:27 +0000 UTC
Nov 15 05:38:32 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: ??? E1115 05:38:32.579908 62389 certchains.go:122] [kube-control-plane-signer kube-controller-manager] rotate at: 2023-07-18 05:38:27 +0000 UTC
Nov 15 05:38:32 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: ??? E1115 05:38:32.579913 62389 certchains.go:122] [kube-control-plane-signer kube-scheduler] rotate at: 2023-07-18 05:38:28 +0000 UTC
Nov 15 05:38:32 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: ??? E1115 05:38:32.579919 62389 certchains.go:122] [kubelet-signer] rotate at: 2023-07-18 05:38:28 +0000 UTC
Nov 15 05:38:32 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: ??? E1115 05:38:32.579924 62389 certchains.go:122] [kubelet-signer kube-csr-signer] rotate at: 2023-07-18 05:38:29 +0000 UTC
Nov 15 05:38:32 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: ??? E1115 05:38:32.579930 62389 certchains.go:122] [kubelet-signer kube-csr-signer kubelet-client] rotate at: 2023-07-18 05:38:29 +0000 UTC
Nov 15 05:38:32 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: ??? E1115 05:38:32.579936 62389 certchains.go:122] [kubelet-signer kube-csr-signer kubelet-server] rotate at: 2023-07-18 05:38:29 +0000 UTC
Nov 15 05:38:32 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: ??? E1115 05:38:32.579941 62389 certchains.go:122] [service-ca] rotate at: 2031-11-18 05:38:30 +0000 UTC
Nov 15 05:38:32 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: ??? E1115 05:38:32.579947 62389 certchains.go:122] [service-ca route-controller-manager-serving] rotate at: 2023-07-18 05:38:30 +0000 UTC
Nov 15 05:38:32 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: ??? I1115 05:38:32.580051 62389 run.go:126] Started service-manager
Nov 15 05:38:32 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: sysconfwatch-controller I1115 05:38:32.580128 62389 manager.go:114] Starting sysconfwatch-controller
Nov 15 05:38:32 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: etcd I1115 05:38:32.580129 62389 manager.go:114] Starting etcd
Nov 15 05:38:32 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: sysconfwatch-controller I1115 05:38:32.580159 62389 sysconfwatch_linux.go:89] starting sysconfwatch-controller with IP address "10.0.0.2"
Nov 15 05:38:32 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: sysconfwatch-controller I1115 05:38:32.580173 62389 sysconfwatch_linux.go:95] sysconfwatch-controller is ready
Nov 15 05:38:32 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: {"level":"info","ts":"2022-11-15T05:38:32.580Z","caller":"embed/etcd.go:131","msg":"configuring peer listeners","listen-peer-urls":["https://10.0.0.2:2380"]}
Nov 15 05:38:32 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: {"level":"info","ts":"2022-11-15T05:38:32.581Z","caller":"embed/etcd.go:481","msg":"starting with peer TLS","tls-info":"cert = /var/lib/microshift/certs/etcd-signer/etcd-peer/peer.crt, key = /var/lib/microshift/certs/etcd-signer/etcd-peer/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/microshift/certs/etcd-signer/ca.crt, client-cert-auth = false, crl-file = ","cipher-suites":["TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384","TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384","TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256","TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256","TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305","TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305"]}
Nov 15 05:38:32 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: {"level":"info","ts":"2022-11-15T05:38:32.583Z","caller":"embed/etcd.go:139","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://10.0.0.2:2379"]}
Nov 15 05:38:32 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: {"level":"info","ts":"2022-11-15T05:38:32.583Z","caller":"embed/etcd.go:309","msg":"starting an etcd server","etcd-version":"3.5.3","git-sha":"Not provided (use ./build instead of go build)","go-version":"go1.18.4","go-os":"linux","go-arch":"amd64","max-cpu-set":8,"max-cpu-available":8,"member-initialized":false,"name":"release-ci-ci-op-k5cwk1pv-7cb14","data-dir":"/var/lib/microshift/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/microshift/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":100000,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://10.0.0.2:2380"],"listen-peer-urls":["https://10.0.0.2:2380"],"advertise-client-urls":["https://10.0.0.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://10.0.0.2:2379"],"listen-metrics-urls":["https://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"release-ci-ci-op-k5cwk1pv-7cb14=https://10.0.0.2:2380","initial-cluster-state":"new","initial-cluster-token":"etcd-cluster","quota-size-bytes":2147483648,"pre-vote":true,"initial-corrupt-check":false,"corrupt-check-time-interval":"0s","auto-compaction-mode":"","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s","max-learners":1}
Nov 15 05:38:32 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: {"level":"info","ts":"2022-11-15T05:38:32.587Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/microshift/etcd/member/snap/db","took":"3.732792ms"}
Nov 15 05:38:32 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: {"level":"info","ts":"2022-11-15T05:38:32.590Z","caller":"etcdserver/raft.go:448","msg":"starting local member","local-member-id":"fde9dd315b6d0b2","cluster-id":"d159fca3fb051972"}
Nov 15 05:38:32 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: {"level":"info","ts":"2022-11-15T05:38:32.590Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fde9dd315b6d0b2 switched to configuration voters=()"}
Nov 15 05:38:32 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: {"level":"info","ts":"2022-11-15T05:38:32.590Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fde9dd315b6d0b2 became follower at term 0"}
Nov 15 05:38:32 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: {"level":"info","ts":"2022-11-15T05:38:32.590Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft fde9dd315b6d0b2 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]"}
Nov 15 05:38:32 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: {"level":"info","ts":"2022-11-15T05:38:32.590Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fde9dd315b6d0b2 became follower at term 1"}
Nov 15 05:38:32 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: {"level":"info","ts":"2022-11-15T05:38:32.590Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fde9dd315b6d0b2 switched to configuration voters=(1143524885326647474)"}
Nov 15 05:38:32 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: {"level":"warn","ts":"2022-11-15T05:38:32.592Z","caller":"auth/store.go:1220","msg":"simple token is not cryptographically signed"}
Nov 15 05:38:32 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: {"level":"info","ts":"2022-11-15T05:38:32.594Z","caller":"mvcc/kvstore.go:415","msg":"kvstore restored","current-rev":1}
Nov 15 05:38:32 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: {"level":"info","ts":"2022-11-15T05:38:32.595Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
Nov 15 05:38:32 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: {"level":"info","ts":"2022-11-15T05:38:32.596Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"fde9dd315b6d0b2","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
Nov 15 05:38:32 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: {"level":"info","ts":"2022-11-15T05:38:32.596Z","caller":"etcdserver/server.go:745","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"fde9dd315b6d0b2","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
Nov 15 05:38:32 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: {"level":"info","ts":"2022-11-15T05:38:32.597Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fde9dd315b6d0b2 switched to configuration voters=(1143524885326647474)"}
Nov 15 05:38:32 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: {"level":"info","ts":"2022-11-15T05:38:32.597Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"d159fca3fb051972","local-member-id":"fde9dd315b6d0b2","added-peer-id":"fde9dd315b6d0b2","added-peer-peer-urls":["https://10.0.0.2:2380"],"added-peer-is-learner":false}
Nov 15 05:38:32 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: {"level":"info","ts":"2022-11-15T05:38:32.598Z","caller":"embed/etcd.go:690","msg":"starting with client TLS","tls-info":"cert = /var/lib/microshift/certs/etcd-signer/etcd-serving/peer.crt, key = /var/lib/microshift/certs/etcd-signer/etcd-serving/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/microshift/certs/etcd-signer/ca.crt, client-cert-auth = false, crl-file = ","cipher-suites":["TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384","TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384","TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256","TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256","TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305","TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305"]}
Nov 15 05:38:32 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: {"level":"info","ts":"2022-11-15T05:38:32.598Z","caller":"embed/etcd.go:583","msg":"serving peer traffic","address":"10.0.0.2:2380"}
Nov 15 05:38:32 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: {"level":"info","ts":"2022-11-15T05:38:32.598Z","caller":"embed/etcd.go:555","msg":"cmux::serve","address":"10.0.0.2:2380"}
Nov 15 05:38:32 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: {"level":"info","ts":"2022-11-15T05:38:32.598Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"fde9dd315b6d0b2","initial-advertise-peer-urls":["https://10.0.0.2:2380"],"listen-peer-urls":["https://10.0.0.2:2380"],"advertise-client-urls":["https://10.0.0.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://10.0.0.2:2379"],"listen-metrics-urls":["https://127.0.0.1:2381"]}
Nov 15 05:38:32 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: {"level":"info","ts":"2022-11-15T05:38:32.598Z","caller":"embed/etcd.go:765","msg":"serving metrics","address":"https://127.0.0.1:2381"}
Nov 15 05:38:33 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: {"level":"info","ts":"2022-11-15T05:38:33.491Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fde9dd315b6d0b2 is starting a new election at term 1"}
Nov 15 05:38:33 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: {"level":"info","ts":"2022-11-15T05:38:33.491Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fde9dd315b6d0b2 became pre-candidate at term 1"}
Nov 15 05:38:33 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: {"level":"info","ts":"2022-11-15T05:38:33.491Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fde9dd315b6d0b2 received MsgPreVoteResp from fde9dd315b6d0b2 at term 1"}
Nov 15 05:38:33 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: {"level":"info","ts":"2022-11-15T05:38:33.491Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fde9dd315b6d0b2 became candidate at term 2"}
Nov 15 05:38:33 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: {"level":"info","ts":"2022-11-15T05:38:33.491Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fde9dd315b6d0b2 received MsgVoteResp from fde9dd315b6d0b2 at term 2"}
Nov 15 05:38:33 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: {"level":"info","ts":"2022-11-15T05:38:33.491Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fde9dd315b6d0b2 became leader at term 2"}
Nov 15 05:38:33 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: {"level":"info","ts":"2022-11-15T05:38:33.491Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: fde9dd315b6d0b2 elected leader fde9dd315b6d0b2 at term 2"}
Nov 15 05:38:33 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: {"level":"info","ts":"2022-11-15T05:38:33.492Z","caller":"etcdserver/server.go:2051","msg":"published local member to cluster through raft","local-member-id":"fde9dd315b6d0b2","local-member-attributes":"{Name:release-ci-ci-op-k5cwk1pv-7cb14 ClientURLs:[https://10.0.0.2:2379]}","request-path":"/0/members/fde9dd315b6d0b2/attributes","cluster-id":"d159fca3fb051972","publish-timeout":"7s"}
Nov 15 05:38:33 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: {"level":"info","ts":"2022-11-15T05:38:33.492Z","caller":"etcdserver/server.go:2516","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
Nov 15 05:38:33 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: {"level":"info","ts":"2022-11-15T05:38:33.492Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
Nov 15 05:38:33 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: etcd I1115 05:38:33.492083 62389 etcd.go:103] etcd is ready
Nov 15 05:38:33 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:33.492114 62389 manager.go:114] Starting kube-apiserver
Nov 15 05:38:33 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: {"level":"info","ts":"2022-11-15T05:38:33.492Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
Nov 15 05:38:33 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: Flag --openshift-config has been deprecated, to be removed
Nov 15 05:38:33 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: {"level":"info","ts":"2022-11-15T05:38:33.493Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
Nov 15 05:38:33 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: {"level":"info","ts":"2022-11-15T05:38:33.493Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.0.2:2379"}
Nov 15 05:38:33 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:33.493763 62389 kube-apiserver.go:305] "kube-apiserver" not yet ready: Get "https://127.0.0.1:6443/readyz": dial tcp 127.0.0.1:6443: connect: connection refused
Nov 15 05:38:33 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: {"level":"info","ts":"2022-11-15T05:38:33.493Z","caller":"membership/cluster.go:587","msg":"set initial cluster version","cluster-id":"d159fca3fb051972","local-member-id":"fde9dd315b6d0b2","cluster-version":"3.5"}
Nov 15 05:38:33 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: {"level":"info","ts":"2022-11-15T05:38:33.493Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
Nov 15 05:38:33 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: {"level":"info","ts":"2022-11-15T05:38:33.493Z","caller":"etcdserver/server.go:2540","msg":"cluster version is updated","cluster-version":"3.5"}
Nov 15 05:38:33 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: Flag --openshift-config has been deprecated, to be removed
Nov 15 05:38:33 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: Flag --enable-logs-handler has been deprecated, This flag will be removed in v1.19
Nov 15 05:38:33 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: Flag --kubelet-read-only-port has been deprecated, kubelet-read-only-port is deprecated and will be removed.
Nov 15 05:38:33 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:33.495702 62389 server.go:620] external host was not specified, using 10.0.0.2
Nov 15 05:38:33 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:33.495928 62389 server.go:201] Version: v1.24.0
Nov 15 05:38:33 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:33.495967 62389 server.go:203] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 15 05:38:33 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:33.904899 62389 shared_informer.go:255] Waiting for caches to sync for node_authorizer
Nov 15 05:38:33 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver W1115 05:38:33.906134 62389 admission.go:83] PersistentVolumeLabel admission controller is deprecated. Please remove this controller from your configuration files and scripts.
Nov 15 05:38:33 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:33.906278 62389 admission.go:47] Admission plugin "autoscaling.openshift.io/ClusterResourceOverride" is not configured so it will be disabled.
Nov 15 05:38:33 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:33.906302 62389 admission.go:33] Admission plugin "autoscaling.openshift.io/RunOnceDuration" is not configured so it will be disabled.
Nov 15 05:38:33 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:33.906309 62389 admission.go:32] Admission plugin "scheduling.openshift.io/PodNodeConstraints" is not configured so it will be disabled.
Nov 15 05:38:33 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:33.906462 62389 endpoint_admission.go:33] Admission plugin "network.openshift.io/RestrictedEndpointsAdmission" is not configured so it will be disabled.
Nov 15 05:38:33 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:33.907230 62389 plugins.go:158] Loaded 18 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,PodNodeSelector,Priority,DefaultTolerationSeconds,PodTolerationRestriction,PersistentVolumeLabel,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,scheduling.openshift.io/OriginPodNodeEnvironment,security.openshift.io/SecurityContextConstraint,security.openshift.io/DefaultSecurityContextConstraints,MutatingAdmissionWebhook.
Nov 15 05:38:33 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:33.907252 62389 plugins.go:161] Loaded 25 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,PodNodeSelector,Priority,PodTolerationRestriction,OwnerReferencesPermissionEnforcement,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,scheduling.openshift.io/OriginPodNodeEnvironment,network.openshift.io/ExternalIPRanger,security.openshift.io/SecurityContextConstraint,security.openshift.io/SCCExecRestrictions,route.openshift.io/IngressAdmission,operator.openshift.io/ValidateDNS,security.openshift.io/ValidateSecurityContextConstraints,config.openshift.io/ValidateNetwork,config.openshift.io/ValidateAPIRequestCount,config.openshift.io/RestrictExtremeWorkerLatencyProfile,operator.openshift.io/ValidateKubeControllerManager,ValidatingAdmissionWebhook,ResourceQuota.
Nov 15 05:38:33 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver W1115 05:38:33.920383 62389 admission.go:83] PersistentVolumeLabel admission controller is deprecated. Please remove this controller from your configuration files and scripts.
Nov 15 05:38:33 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:33.920461 62389 admission.go:47] Admission plugin "autoscaling.openshift.io/ClusterResourceOverride" is not configured so it will be disabled.
Nov 15 05:38:33 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:33.920472 62389 admission.go:33] Admission plugin "autoscaling.openshift.io/RunOnceDuration" is not configured so it will be disabled.
Nov 15 05:38:33 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:33.920479 62389 admission.go:32] Admission plugin "scheduling.openshift.io/PodNodeConstraints" is not configured so it will be disabled.
Nov 15 05:38:33 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:33.920617 62389 endpoint_admission.go:33] Admission plugin "network.openshift.io/RestrictedEndpointsAdmission" is not configured so it will be disabled.
Nov 15 05:38:33 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:33.921575 62389 plugins.go:158] Loaded 18 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,PodNodeSelector,Priority,DefaultTolerationSeconds,PodTolerationRestriction,PersistentVolumeLabel,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,scheduling.openshift.io/OriginPodNodeEnvironment,security.openshift.io/SecurityContextConstraint,security.openshift.io/DefaultSecurityContextConstraints,MutatingAdmissionWebhook.
Nov 15 05:38:33 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:33.921600 62389 plugins.go:161] Loaded 25 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,PodNodeSelector,Priority,PodTolerationRestriction,OwnerReferencesPermissionEnforcement,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,scheduling.openshift.io/OriginPodNodeEnvironment,network.openshift.io/ExternalIPRanger,security.openshift.io/SecurityContextConstraint,security.openshift.io/SCCExecRestrictions,route.openshift.io/IngressAdmission,operator.openshift.io/ValidateDNS,security.openshift.io/ValidateSecurityContextConstraints,config.openshift.io/ValidateNetwork,config.openshift.io/ValidateAPIRequestCount,config.openshift.io/RestrictExtremeWorkerLatencyProfile,operator.openshift.io/ValidateKubeControllerManager,ValidatingAdmissionWebhook,ResourceQuota.
Nov 15 05:38:33 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver W1115 05:38:33.948059 62389 genericapiserver.go:690] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
Nov 15 05:38:33 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:33.948818 62389 instance.go:261] Using reconciler: lease
Nov 15 05:38:34 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:34.139098 62389 instance.go:575] API group "internal.apiserver.k8s.io" is not enabled, skipping.
Nov 15 05:38:34 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver W1115 05:38:34.454651 62389 genericapiserver.go:690] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
Nov 15 05:38:34 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver W1115 05:38:34.456076 62389 genericapiserver.go:690] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
Nov 15 05:38:34 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver W1115 05:38:34.459429 62389 genericapiserver.go:690] Skipping API autoscaling/v2beta1 because it has no resources.
Nov 15 05:38:34 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver W1115 05:38:34.463416 62389 genericapiserver.go:690] Skipping API batch/v1beta1 because it has no resources.
Nov 15 05:38:34 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver W1115 05:38:34.465439 62389 genericapiserver.go:690] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
Nov 15 05:38:34 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver W1115 05:38:34.467346 62389 genericapiserver.go:690] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
Nov 15 05:38:34 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver W1115 05:38:34.467394 62389 genericapiserver.go:690] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
Nov 15 05:38:34 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver W1115 05:38:34.471309 62389 genericapiserver.go:690] Skipping API networking.k8s.io/v1beta1 because it has no resources.
Nov 15 05:38:34 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver W1115 05:38:34.471333 62389 genericapiserver.go:690] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
Nov 15 05:38:34 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver W1115 05:38:34.472669 62389 genericapiserver.go:690] Skipping API node.k8s.io/v1beta1 because it has no resources.
Nov 15 05:38:34 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver W1115 05:38:34.472688 62389 genericapiserver.go:690] Skipping API node.k8s.io/v1alpha1 because it has no resources.
Nov 15 05:38:34 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver W1115 05:38:34.472719 62389 genericapiserver.go:690] Skipping API policy/v1beta1 because it has no resources.
Nov 15 05:38:34 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver W1115 05:38:34.476908 62389 genericapiserver.go:690] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
Nov 15 05:38:34 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver W1115 05:38:34.476931 62389 genericapiserver.go:690] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
Nov 15 05:38:34 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver W1115 05:38:34.478923 62389 genericapiserver.go:690] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
Nov 15 05:38:34 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver W1115 05:38:34.478945 62389 genericapiserver.go:690] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
Nov 15 05:38:34 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver W1115 05:38:34.482940 62389 genericapiserver.go:690] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
Nov 15 05:38:34 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver W1115 05:38:34.486786 62389 genericapiserver.go:690] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
Nov 15 05:38:34 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver W1115 05:38:34.490648 62389 genericapiserver.go:690] Skipping API apps/v1beta2 because it has no resources.
Nov 15 05:38:34 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver W1115 05:38:34.490673 62389 genericapiserver.go:690] Skipping API apps/v1beta1 because it has no resources.
Nov 15 05:38:34 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver W1115 05:38:34.492746 62389 genericapiserver.go:690] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
Nov 15 05:38:34 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver W1115 05:38:34.494114 62389 genericapiserver.go:690] Skipping API events.k8s.io/v1beta1 because it has no resources.
Nov 15 05:38:34 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver W1115 05:38:34.495695 62389 admission.go:83] PersistentVolumeLabel admission controller is deprecated. Please remove this controller from your configuration files and scripts.
Nov 15 05:38:34 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:34.495804 62389 admission.go:47] Admission plugin "autoscaling.openshift.io/ClusterResourceOverride" is not configured so it will be disabled.
Nov 15 05:38:34 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:34.495820 62389 admission.go:33] Admission plugin "autoscaling.openshift.io/RunOnceDuration" is not configured so it will be disabled.
Nov 15 05:38:34 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:34.495830 62389 admission.go:32] Admission plugin "scheduling.openshift.io/PodNodeConstraints" is not configured so it will be disabled.
Nov 15 05:38:34 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:34.496137 62389 endpoint_admission.go:33] Admission plugin "network.openshift.io/RestrictedEndpointsAdmission" is not configured so it will be disabled.
Nov 15 05:38:34 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:34.497526 62389 plugins.go:158] Loaded 18 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,PodNodeSelector,Priority,DefaultTolerationSeconds,PodTolerationRestriction,PersistentVolumeLabel,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,scheduling.openshift.io/OriginPodNodeEnvironment,security.openshift.io/SecurityContextConstraint,security.openshift.io/DefaultSecurityContextConstraints,MutatingAdmissionWebhook.
Nov 15 05:38:34 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:34.497547 62389 plugins.go:161] Loaded 25 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,PodNodeSelector,Priority,PodTolerationRestriction,OwnerReferencesPermissionEnforcement,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,scheduling.openshift.io/OriginPodNodeEnvironment,network.openshift.io/ExternalIPRanger,security.openshift.io/SecurityContextConstraint,security.openshift.io/SCCExecRestrictions,route.openshift.io/IngressAdmission,operator.openshift.io/ValidateDNS,security.openshift.io/ValidateSecurityContextConstraints,config.openshift.io/ValidateNetwork,config.openshift.io/ValidateAPIRequestCount,config.openshift.io/RestrictExtremeWorkerLatencyProfile,operator.openshift.io/ValidateKubeControllerManager,ValidatingAdmissionWebhook,ResourceQuota.
Nov 15 05:38:34 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver W1115 05:38:34.523202 62389 genericapiserver.go:690] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
Nov 15 05:38:35 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:35.669537 62389 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/microshift/certs/aggregator-signer/ca.crt"
Nov 15 05:38:35 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:35.669652 62389 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/microshift/certs/ca-bundle/client-ca.crt"
Nov 15 05:38:35 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:35.669704 62389 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/microshift/certs/kube-apiserver-service-network-signer/kube-apiserver-service-network-serving/server.crt::/var/lib/microshift/certs/kube-apiserver-service-network-signer/kube-apiserver-service-network-serving/server.key"
Nov 15 05:38:35 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:35.669950 62389 dynamic_serving_content.go:132] "Starting controller" name="sni-serving-cert::/var/lib/microshift/certs/kube-apiserver-external-signer/kube-external-serving/server.crt::/var/lib/microshift/certs/kube-apiserver-external-signer/kube-external-serving/server.key"
Nov 15 05:38:35 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:35.670058 62389 dynamic_serving_content.go:132] "Starting controller" name="sni-serving-cert::/var/lib/microshift/certs/kube-apiserver-localhost-signer/kube-apiserver-localhost-serving/server.crt::/var/lib/microshift/certs/kube-apiserver-localhost-signer/kube-apiserver-localhost-serving/server.key"
Nov 15 05:38:35 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:35.670383 62389 secure_serving.go:210] Serving securely on [::]:6443
Nov 15 05:38:35 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:35.670446 62389 dynamic_serving_content.go:132] "Starting controller" name="sni-serving-cert::/var/lib/microshift/certs/kube-apiserver-service-network-signer/kube-apiserver-service-network-serving/server.crt::/var/lib/microshift/certs/kube-apiserver-service-network-signer/kube-apiserver-service-network-serving/server.key"
Nov 15 05:38:35 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:35.670624 62389 autoregister_controller.go:141] Starting autoregister controller
Nov 15 05:38:35 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:35.670642 62389 cache.go:32] Waiting for caches to sync for autoregister controller
Nov 15 05:38:35 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:35.670662 62389 tlsconfig.go:240] "Starting DynamicServingCertificateController"
Nov 15 05:38:35 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:35.671117 62389 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
Nov 15 05:38:35 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:35.671139 62389 shared_informer.go:255] Waiting for caches to sync for cluster_authentication_trust_controller
Nov 15 05:38:35 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:35.671169 62389 controller.go:80] Starting OpenAPI V3 AggregationController
Nov 15 05:38:35 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:35.671866 62389 apiservice_controller.go:97] Starting APIServiceRegistrationController
Nov 15 05:38:35 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:35.671884 62389 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
Nov 15 05:38:35 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:35.671964 62389 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/microshift/certs/ca-bundle/client-ca.crt"
Nov 15 05:38:35 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:35.673277 62389 apf_controller.go:300] Starting API Priority and Fairness config controller
Nov 15 05:38:35 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:35.673317 62389 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/microshift/certs/aggregator-signer/aggregator-client/client.crt::/var/lib/microshift/certs/aggregator-signer/aggregator-client/client.key"
Nov 15 05:38:35 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:35.681070 62389 customresource_discovery_controller.go:209] Starting DiscoveryController
Nov 15 05:38:35 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:35.681037 62389 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/microshift/certs/aggregator-signer/ca.crt"
Nov 15 05:38:35 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:35.681155 62389 clusterquotamapping.go:127] Starting ClusterQuotaMappingController controller
Nov 15 05:38:35 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:35.681405 62389 controller.go:85] Starting OpenAPI controller
Nov 15 05:38:35 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:35.681444 62389 controller.go:85] Starting OpenAPI V3 controller
Nov 15 05:38:35 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:35.681804 62389 controller.go:83] Starting OpenAPI AggregationController
Nov 15 05:38:35 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:35.681841 62389 naming_controller.go:291] Starting NamingConditionController
Nov 15 05:38:35 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:35.681879 62389 establishing_controller.go:76] Starting EstablishingController
Nov 15 05:38:35 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:35.681917 62389 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
Nov 15 05:38:35 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:35.681941 62389 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
Nov 15 05:38:35 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:35.681963 62389 crd_finalizer.go:266] Starting CRDFinalizer
Nov 15 05:38:35 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:35.682209 62389 crdregistration_controller.go:112] Starting crd-autoregister controller
Nov 15 05:38:35 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:35.682226 62389 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
Nov 15 05:38:35 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:35.681100 62389 available_controller.go:513] Starting AvailableConditionController
Nov 15 05:38:35 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:35.687079 62389 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
Nov 15 05:38:35 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:35.691478 62389 kube-apiserver.go:305] "kube-apiserver" not yet ready: unknown
Nov 15 05:38:35 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver W1115 05:38:35.703200 62389 sdn_readyz_wait.go:102] api.openshift-oauth-apiserver.svc endpoints were not found
Nov 15 05:38:35 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver W1115 05:38:35.703433 62389 sdn_readyz_wait.go:102] api.openshift-apiserver.svc endpoints were not found
Nov 15 05:38:35 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver E1115 05:38:35.704937 62389 sdn_readyz_wait.go:138] api-openshift-oauth-apiserver-available did not find an openshift-oauth-apiserver endpoint
Nov 15 05:38:35 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:35.705855 62389 shared_informer.go:262] Caches are synced for node_authorizer
Nov 15 05:38:35 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver E1115 05:38:35.710354 62389 sdn_readyz_wait.go:138] api-openshift-apiserver-available did not find an openshift-apiserver endpoint
Nov 15 05:38:35 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:35.711729 62389 controller.go:616] quota admission added evaluator for: namespaces
Nov 15 05:38:35 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:35.770845 62389 cache.go:39] Caches are synced for autoregister controller
Nov 15 05:38:35 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:35.771186 62389 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
Nov 15 05:38:35 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:35.772540 62389 cache.go:39] Caches are synced for APIServiceRegistrationController controller
Nov 15 05:38:35 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:35.773351 62389 apf_controller.go:305] Running API Priority and Fairness config worker
Nov 15 05:38:35 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:35.782252 62389 shared_informer.go:262] Caches are synced for crd-autoregister
Nov 15 05:38:35 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:35.787278 62389 cache.go:39] Caches are synced for AvailableConditionController controller
Nov 15 05:38:36 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:36.443530 62389 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
Nov 15 05:38:36 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:36.494837 62389 kube-apiserver.go:305] "kube-apiserver" not yet ready: unknown
Nov 15 05:38:36 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:36.676957 62389 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
Nov 15 05:38:36 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:36.680148 62389 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
Nov 15 05:38:36 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:36.680170 62389 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
Nov 15 05:38:37 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:37.024186 62389 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
Nov 15 05:38:37 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:37.054668 62389 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
Nov 15 05:38:37 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:37.113956 62389 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.43.0.1]
Nov 15 05:38:37 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver W1115 05:38:37.118314 62389 lease.go:250] Resetting endpoints for master service "kubernetes" to [10.0.0.2]
Nov 15 05:38:37 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:37.119618 62389 controller.go:616] quota admission added evaluator for: endpoints
Nov 15 05:38:37 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:37.122743 62389 controller.go:616] quota admission added evaluator for: endpointslices.discovery.k8s.io
Nov 15 05:38:37 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:37.495618 62389 kube-apiserver.go:319] "kube-apiserver" is ready
Nov 15 05:38:37 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:37.495679 62389 manager.go:114] Starting kube-controller-manager
Nov 15 05:38:37 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-scheduler I1115 05:38:37.495683 62389 manager.go:114] Starting kube-scheduler
Nov 15 05:38:37 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: openshift-crd-manager I1115 05:38:37.495881 62389 manager.go:114] Starting openshift-crd-manager
Nov 15 05:38:37 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: openshift-crd-manager I1115 05:38:37.508456 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab77b01f2845 dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:APIServiceCreated,Message:Created APIService.apiregistration.k8s.io/v1.security.openshift.io because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:38:37.508347973 +0000 UTC m=+9.987312990,LastTimestamp:2022-11-15 05:38:37.508347973 +0000 UTC m=+9.987312990,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}
Nov 15 05:38:37 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: openshift-crd-manager I1115 05:38:37.508520 62389 crd.go:155] Applying openshift CRD crd/0000_03_securityinternal-openshift_02_rangeallocation.crd.yaml
Nov 15 05:38:37 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-scheduler I1115 05:38:37.778418 62389 serving.go:348] Generated self-signed cert in-memory
Nov 15 05:38:37 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:37.829402 62389 serving.go:348] Generated self-signed cert in-memory
Nov 15 05:38:38 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-scheduler W1115 05:38:38.370223 62389 authentication.go:317] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
Nov 15 05:38:38 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-scheduler W1115 05:38:38.370254 62389 authentication.go:341] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
Nov 15 05:38:38 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-scheduler W1115 05:38:38.370267 62389 authorization.go:198] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
Nov 15 05:38:38 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-scheduler I1115 05:38:38.378392 62389 server.go:152] "Starting Kubernetes Scheduler" version="v1.24.0"
Nov 15 05:38:38 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-scheduler I1115 05:38:38.378423 62389 server.go:154] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 15 05:38:38 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-scheduler I1115 05:38:38.379319 62389 secure_serving.go:210] Serving securely on [::]:10259
Nov 15 05:38:38 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-scheduler I1115 05:38:38.379416 62389 tlsconfig.go:240] "Starting DynamicServingCertificateController"
Nov 15 05:38:40 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver W1115 05:38:40.694009 62389 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Nov 15 05:38:40 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver E1115 05:38:40.694057 62389 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Nov 15 05:38:40 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver W1115 05:38:40.705229 62389 reflector.go:424] github.com/openshift/client-go/security/informers/externalversions/factory.go:101: failed to list *v1.SecurityContextConstraints: the server could not find the requested resource (get securitycontextconstraints.security.openshift.io)
Nov 15 05:38:40 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver E1115 05:38:40.705286 62389 reflector.go:140] github.com/openshift/client-go/security/informers/externalversions/factory.go:101: Failed to watch *v1.SecurityContextConstraints: failed to list *v1.SecurityContextConstraints: the server could not find the requested resource (get securitycontextconstraints.security.openshift.io)
Nov 15 05:38:40 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver W1115 05:38:40.705238 62389 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Nov 15 05:38:40 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver E1115 05:38:40.705315 62389 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Nov 15 05:38:41 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver W1115 05:38:41.710770 62389 reflector.go:424] github.com/openshift/client-go/security/informers/externalversions/factory.go:101: failed to list *v1.SecurityContextConstraints: the server could not find the requested resource (get securitycontextconstraints.security.openshift.io)
Nov 15 05:38:41 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver E1115 05:38:41.710806 62389 reflector.go:140] github.com/openshift/client-go/security/informers/externalversions/factory.go:101: Failed to watch *v1.SecurityContextConstraints: failed to list *v1.SecurityContextConstraints: the server could not find the requested resource (get securitycontextconstraints.security.openshift.io)
Nov 15 05:38:41 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver W1115 05:38:41.825472 62389 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Nov 15 05:38:41 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver E1115 05:38:41.825534 62389 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Nov 15 05:38:41 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver W1115 05:38:41.859171 62389 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Nov 15 05:38:41 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver E1115 05:38:41.859210 62389 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Nov 15 05:38:42 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-scheduler I1115 05:38:42.500443 62389 kube-scheduler.go:87] kube-scheduler is ready
Nov 15 05:38:42 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: openshift-crd-manager I1115 05:38:42.516939 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab78daa7583c dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:CustomResourceDefinitionCreated,Message:Created CustomResourceDefinition.apiextensions.k8s.io/rangeallocations.security.internal.openshift.io because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:38:42.516883516 +0000 UTC m=+14.995848525,LastTimestamp:2022-11-15 05:38:42.516883516 +0000 UTC m=+14.995848525,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}
Nov 15 05:38:42 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: openshift-crd-manager I1115 05:38:42.516966 62389 crd.go:166] Applied openshift CRD crd/0000_03_securityinternal-openshift_02_rangeallocation.crd.yaml
Nov 15 05:38:42 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: openshift-crd-manager I1115 05:38:42.516977 62389 crd.go:155] Applying openshift CRD crd/0000_03_security-openshift_01_scc.crd.yaml
Nov 15 05:38:43 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver W1115 05:38:43.516578 62389 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Nov 15 05:38:43 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver E1115 05:38:43.516623 62389 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Nov 15 05:38:44 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver W1115 05:38:44.515212 62389 reflector.go:424] github.com/openshift/client-go/security/informers/externalversions/factory.go:101: failed to list *v1.SecurityContextConstraints: the server could not find the requested resource (get securitycontextconstraints.security.openshift.io)
Nov 15 05:38:44 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver E1115 05:38:44.515261 62389 reflector.go:140] github.com/openshift/client-go/security/informers/externalversions/factory.go:101: Failed to watch *v1.SecurityContextConstraints: failed to list *v1.SecurityContextConstraints: the server could not find the requested resource (get securitycontextconstraints.security.openshift.io)
Nov 15 05:38:44 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver W1115 05:38:44.864353 62389 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Nov 15 05:38:44 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver E1115 05:38:44.864400 62389 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Nov 15 05:38:47 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: openshift-crd-manager I1115 05:38:47.527902 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7a055491e2 dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:CustomResourceDefinitionCreated,Message:Created CustomResourceDefinition.apiextensions.k8s.io/securitycontextconstraints.security.openshift.io because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:38:47.52784637 +0000 UTC m=+20.006811381,LastTimestamp:2022-11-15 05:38:47.52784637 +0000 UTC m=+20.006811381,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}
Nov 15 05:38:47 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: openshift-crd-manager I1115 05:38:47.527932 62389 crd.go:166] Applied openshift CRD crd/0000_03_security-openshift_01_scc.crd.yaml
Nov 15 05:38:47 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: openshift-crd-manager I1115 05:38:47.527944 62389 crd.go:155] Applying openshift CRD crd/route.crd.yaml
Nov 15 05:38:48 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver W1115 05:38:48.716058 62389 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Nov 15 05:38:48 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver E1115 05:38:48.716148 62389 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
Nov 15 05:38:49 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver W1115 05:38:49.726436 62389 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Nov 15 05:38:49 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver E1115 05:38:49.726481 62389 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the
server could not find the requested resource (get groups.user.openshift.io) Nov 15 05:38:52 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: openshift-crd-manager I1115 05:38:52.539123 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7b3005b459 dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:CustomResourceDefinitionCreated,Message:Created CustomResourceDefinition.apiextensions.k8s.io/routes.route.openshift.io because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:38:52.539065433 +0000 UTC m=+25.018030437,LastTimestamp:2022-11-15 05:38:52.539065433 +0000 UTC m=+25.018030437,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Nov 15 05:38:52 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: openshift-crd-manager I1115 05:38:52.539146 62389 crd.go:166] Applied openshift CRD crd/route.crd.yaml Nov 15 05:38:52 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: openshift-crd-manager I1115 05:38:52.539156 62389 crd.go:155] Applying openshift CRD components/odf-lvm/topolvm.cybozu.com_logicalvolumes.yaml Nov 15 05:38:55 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver W1115 05:38:55.508631 62389 reflector.go:424] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io) Nov 15 05:38:55 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver E1115 05:38:55.508668 62389 reflector.go:140] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get 
clusterresourcequotas.quota.openshift.io) Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: openshift-crd-manager I1115 05:38:57.545958 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7c5a74004f dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:CustomResourceDefinitionCreated,Message:Created CustomResourceDefinition.apiextensions.k8s.io/logicalvolumes.topolvm.cybozu.com because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:38:57.545904207 +0000 UTC m=+30.024869213,LastTimestamp:2022-11-15 05:38:57.545904207 +0000 UTC m=+30.024869213,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: openshift-crd-manager I1115 05:38:57.545989 62389 crd.go:166] Applied openshift CRD components/odf-lvm/topolvm.cybozu.com_logicalvolumes.yaml Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: openshift-crd-manager I1115 05:38:57.546001 62389 openshift-crd-manager.go:46] openshift-crd-manager applied default CRDs Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: openshift-crd-manager I1115 05:38:57.546007 62389 openshift-crd-manager.go:48] openshift-crd-manager waiting for CRDs acceptance before proceeding Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: openshift-crd-manager I1115 05:38:57.546719 62389 crd.go:81] Waiting for crd crd/0000_03_securityinternal-openshift_02_rangeallocation.crd.yaml condition.type: established Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:57.547099 62389 core.go:170] Applying corev1 api core/namespace-openshift-kube-controller-manager.yaml Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: 
openshift-crd-manager I1115 05:38:57.550743 62389 crd.go:81] Waiting for crd crd/0000_03_security-openshift_01_scc.crd.yaml condition.type: established Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: openshift-crd-manager I1115 05:38:57.556937 62389 crd.go:81] Waiting for crd crd/route.crd.yaml condition.type: established Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:57.557446 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7c5b2352d8 dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NamespaceCreated,Message:Created Namespace/openshift-kube-controller-manager because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:38:57.557394136 +0000 UTC m=+30.036359145,LastTimestamp:2022-11-15 05:38:57.557394136 +0000 UTC m=+30.036359145,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:57.557466 62389 core.go:170] Applying corev1 api core/namespace-openshift-infra.yaml Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: openshift-crd-manager I1115 05:38:57.561481 62389 crd.go:81] Waiting for crd components/odf-lvm/topolvm.cybozu.com_logicalvolumes.yaml condition.type: established Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:57.563339 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7c5b7d3abd dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NamespaceCreated,Message:Created Namespace/openshift-infra because it was 
missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:38:57.563286205 +0000 UTC m=+30.042251211,LastTimestamp:2022-11-15 05:38:57.563286205 +0000 UTC m=+30.042251211,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:57.563378 62389 controllermanager.go:191] Version: v1.24.0 Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:57.563390 62389 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: openshift-crd-manager I1115 05:38:57.564469 62389 openshift-crd-manager.go:52] openshift-crd-manager all CRDs are ready Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: openshift-crd-manager I1115 05:38:57.564489 62389 manager.go:119] openshift-crd-manager completed Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: route-controller-manager I1115 05:38:57.564545 62389 manager.go:114] Starting route-controller-manager Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller I1115 05:38:57.564743 62389 manager.go:114] Starting cluster-policy-controller Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: route-controller-manager I1115 05:38:57.565283 62389 core.go:170] Applying corev1 api core/0000_50_cluster-openshift-route-controller-manager_00_namespace.yaml Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: openshift-default-scc-manager I1115 05:38:57.565483 62389 manager.go:114] Starting openshift-default-scc-manager Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:57.566442 62389 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController Nov 15 05:38:57 
release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:57.566457 62389 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:57.566470 62389 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:57.566463 62389 shared_informer.go:255] Waiting for caches to sync for RequestHeaderAuthRequestController Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:57.566489 62389 tlsconfig.go:240] "Starting DynamicServingCertificateController" Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:57.566561 62389 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:57.566572 62389 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:57.566471 62389 secure_serving.go:210] Serving securely on 127.0.0.1:10257 Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:57.570906 62389 kube-controller-manager.go:113] kube-controller-manager is ready Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: route-controller-manager I1115 05:38:57.571437 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7c5bf89d84 dummy 0 0001-01-01 00:00:00 +0000 
UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NamespaceCreated,Message:Created Namespace/openshift-route-controller-manager because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:38:57.57137242 +0000 UTC m=+30.050337442,LastTimestamp:2022-11-15 05:38:57.57137242 +0000 UTC m=+30.050337442,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: openshift-default-scc-manager I1115 05:38:57.572257 62389 scc.go:87] Applying scc api scc/0000_20_kube-apiserver-operator_00_scc-anyuid.yaml Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: route-controller-manager I1115 05:38:57.572813 62389 controller_manager.go:26] Starting controllers on 0.0.0.0:8445 (unknown) Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: route-controller-manager I1115 05:38:57.573920 62389 standalone_apiserver.go:104] Started health checks at 0.0.0.0:8445 Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: route-controller-manager I1115 05:38:57.573939 62389 leaderelection.go:248] attempting to acquire leader lease openshift-route-controller-manager/openshift-route-controllers... 
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:57.574276 62389 controller.go:616] quota admission added evaluator for: serviceaccounts Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:38:57.578451 62389 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: route-controller-manager I1115 05:38:57.580284 62389 leaderelection.go:258] successfully acquired lease openshift-route-controller-manager/openshift-route-controllers Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: route-controller-manager I1115 05:38:57.580373 62389 event.go:285] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-route-controller-manager", Name:"openshift-route-controllers", UID:"6a929ea1-b24c-49f7-a43f-b4155f6939ab", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"224", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' release-ci-ci-op-k5cwk1pv-7cb14 became leader Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: route-controller-manager W1115 05:38:57.580595 62389 route.go:78] "openshift.io/ingress-ip" is disabled Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:57.581849 62389 shared_informer.go:255] Waiting for caches to sync for tokens Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller I1115 05:38:57.583949 62389 policy_controller.go:88] Started "openshift.io/namespace-security-allocation" Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller W1115 05:38:57.583970 62389 policy_controller.go:74] "openshift.io/resourcequota" is disabled Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller W1115 05:38:57.583979 62389 policy_controller.go:74] "openshift.io/cluster-quota-reconciliation" 
is disabled Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller I1115 05:38:57.584086 62389 base_controller.go:67] Waiting for caches to sync for namespace-security-allocation-controller Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller I1115 05:38:57.584145 62389 event.go:285] Event(v1.ObjectReference{Kind:"Namespace", Namespace:"openshift-kube-controller-manager", Name:"openshift-kube-controller-manager", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "namespace-security-allocation-controller" resync interval is set to 0s which might lead to client request throttling Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: openshift-default-scc-manager I1115 05:38:57.585436 62389 scc.go:87] Applying scc api scc/0000_20_kube-apiserver-operator_00_scc-hostaccess.yaml Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: route-controller-manager I1115 05:38:57.589902 62389 ingress.go:262] ingress-to-route metrics registered with prometheus Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: route-controller-manager I1115 05:38:57.589937 62389 route.go:91] Started "openshift.io/ingress-to-route" Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: route-controller-manager I1115 05:38:57.589948 62389 route.go:93] Started Route Controllers Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: route-controller-manager I1115 05:38:57.590295 62389 ingress.go:313] Starting controller Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller I1115 05:38:57.597562 62389 policy_controller.go:88] Started "openshift.io/cluster-csr-approver" Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller I1115 05:38:57.597743 62389 base_controller.go:67] Waiting for caches to sync for 
WebhookAuthenticatorCertApprover_csr-approver-controller Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:57.597754 62389 controllermanager.go:651] Started "cronjob" Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:57.597874 62389 cronjob_controllerv2.go:135] "Starting cronjob controller v2" Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:57.597922 62389 shared_informer.go:255] Waiting for caches to sync for cronjob Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: openshift-default-scc-manager I1115 05:38:57.598006 62389 scc.go:87] Applying scc api scc/0000_20_kube-apiserver-operator_00_scc-hostmount-anyuid.yaml Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:57.609731 62389 controllermanager.go:651] Started "csrapproving" Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager W1115 05:38:57.609764 62389 controllermanager.go:616] "tokencleaner" is disabled Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:57.610036 62389 certificate_controller.go:112] Starting certificate controller "csrapproving" Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:57.610057 62389 shared_informer.go:255] Waiting for caches to sync for certificate-csrapproving Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller I1115 05:38:57.610628 62389 policy_controller.go:88] Started "openshift.io/podsecurity-admission-label-syncer" Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller I1115 05:38:57.610658 62389 policy_controller.go:91] Started Origin Controllers Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: 
cluster-policy-controller I1115 05:38:57.610960 62389 base_controller.go:67] Waiting for caches to sync for pod-security-admission-label-synchronization-controller Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller I1115 05:38:57.611267 62389 event.go:285] Event(v1.ObjectReference{Kind:"Namespace", Namespace:"openshift-kube-controller-manager", Name:"openshift-kube-controller-manager", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "pod-security-admission-label-synchronization-controller" resync interval is set to 0s which might lead to client request throttling Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: openshift-default-scc-manager I1115 05:38:57.611756 62389 scc.go:87] Applying scc api scc/0000_20_kube-apiserver-operator_00_scc-hostnetwork-v2.yaml Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller I1115 05:38:57.625020 62389 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:persistent-volume-binder" not found Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller I1115 05:38:57.625056 62389 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:statefulset-controller" not found Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller I1115 05:38:57.625114 62389 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:kube-dns" not found Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller I1115 05:38:57.625137 62389 sccrolecache.go:460] 
failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller I1115 05:38:57.625159 62389 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:disruption-controller" not found Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller I1115 05:38:57.625200 62389 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:expand-controller" not found Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller I1115 05:38:57.625245 62389 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:namespace-controller" not found Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller I1115 05:38:57.625264 62389 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:job-controller" not found Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller I1115 05:38:57.625283 62389 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:replication-controller" not found Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller I1115 05:38:57.625294 62389 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: 
couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:certificate-controller" not found Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller I1115 05:38:57.625323 62389 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:basic-user" not found Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller I1115 05:38:57.625339 62389 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:kube-controller-manager" not found Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller I1115 05:38:57.625363 62389 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:node-admin" not found Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller I1115 05:38:57.625376 62389 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:clusterrole-aggregation-controller" not found Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller I1115 05:38:57.625400 62389 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:generic-garbage-collector" not found Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller I1115 05:38:57.625416 62389 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: 
clusterrole.rbac.authorization.k8s.io "system:controller:service-ca-cert-publisher" not found Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller I1115 05:38:57.625439 62389 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:discovery" not found Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller I1115 05:38:57.625457 62389 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:daemon-set-controller" not found Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller I1115 05:38:57.625475 62389 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:endpoint-controller" not found Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller I1115 05:38:57.625504 62389 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:resourcequota-controller" not found Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller I1115 05:38:57.625526 62389 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:root-ca-cert-publisher" not found Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller I1115 05:38:57.625538 62389 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io 
"system:volume-scheduler" not found Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller I1115 05:38:57.625565 62389 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:node-controller" not found Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller I1115 05:38:57.625581 62389 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:ttl-after-finished-controller" not found Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller I1115 05:38:57.625606 62389 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:monitoring" not found Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller I1115 05:38:57.625621 62389 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:pod-garbage-collector" not found Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller I1115 05:38:57.625652 62389 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:route-controller" not found Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller I1115 05:38:57.625666 62389 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:service-account-controller" not found Nov 15 05:38:57 
release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller I1115 05:38:57.625694 62389 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:pv-protection-controller" not found
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller I1115 05:38:57.625712 62389 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:horizontal-pod-autoscaler" not found
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller I1115 05:38:57.625733 62389 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:service-controller" not found
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller I1115 05:38:57.625749 62389 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "cluster-admin" not found
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller I1115 05:38:57.625772 62389 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller I1115 05:38:57.625787 62389 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:cronjob-controller" not found
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller I1115 05:38:57.625810 62389 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:deployment-controller" not found
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller I1115 05:38:57.625821 62389 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:ephemeral-volume-controller" not found
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller I1115 05:38:57.625904 62389 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:node-proxier" not found
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller I1115 05:38:57.626129 62389 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:attachdetach-controller" not found
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller I1115 05:38:57.626194 62389 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:endpointslice-controller" not found
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller I1115 05:38:57.626289 62389 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:ttl-controller" not found
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller I1115 05:38:57.626354 62389 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:public-info-viewer" not found
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller I1115 05:38:57.626407 62389 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller I1115 05:38:57.626441 62389 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:node" not found
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller I1115 05:38:57.626527 62389 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:controller:replicaset-controller" not found
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:57.630472 62389 controllermanager.go:651] Started "pv-protection"
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:57.630769 62389 pv_protection_controller.go:79] Starting PV protection controller
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:57.630785 62389 shared_informer.go:255] Waiting for caches to sync for PV protection
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: openshift-default-scc-manager I1115 05:38:57.630809 62389 scc.go:87] Applying scc api scc/0000_20_kube-apiserver-operator_00_scc-hostnetwork.yaml
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: openshift-default-scc-manager I1115 05:38:57.647213 62389 scc.go:87] Applying scc api scc/0000_20_kube-apiserver-operator_00_scc-nonroot-v2.yaml
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:57.665190 62389 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:57.665272 62389 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for daemonsets.apps
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:57.665294 62389 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for endpoints
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:57.665322 62389 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for csistoragecapacities.storage.k8s.io
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:57.665364 62389 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for ingresses.networking.k8s.io
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:57.665389 62389 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager W1115 05:38:57.665404 62389 shared_informer.go:533] resyncPeriod 14h10m54.601088482s is smaller than resyncCheckPeriod 19h7m22.805130426s and the informer has already started. Changing it to 19h7m22.805130426s
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:57.665475 62389 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for serviceaccounts
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:57.665514 62389 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for podtemplates
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:57.665540 62389 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for statefulsets.apps
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:57.665561 62389 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for cronjobs.batch
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:57.665607 62389 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for replicasets.apps
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:57.665626 62389 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for jobs.batch
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:57.665650 62389 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:57.665673 62389 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for leases.coordination.k8s.io
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:57.665697 62389 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for endpointslices.discovery.k8s.io
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:57.665726 62389 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for routes.route.openshift.io
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:57.665760 62389 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:57.665781 62389 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for limitranges
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:57.665814 62389 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for horizontalpodautoscalers.autoscaling
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:57.665866 62389 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for controllerrevisions.apps
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:57.665896 62389 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for deployments.apps
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:57.665913 62389 controllermanager.go:651] Started "resourcequota"
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:57.666109 62389 resource_quota_controller.go:277] Starting resource quota controller
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:57.666120 62389 shared_informer.go:255] Waiting for caches to sync for resource quota
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:57.666249 62389 resource_quota_monitor.go:295] QuotaMonitor running
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:57.666659 62389 shared_informer.go:262] Caches are synced for RequestHeaderAuthRequestController
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:57.666749 62389 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:57.668266 62389 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:57.673023 62389 controllermanager.go:651] Started "serviceaccount"
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:57.673173 62389 serviceaccounts_controller.go:117] Starting service account controller
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:57.673195 62389 shared_informer.go:255] Waiting for caches to sync for service account
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:57.681161 62389 controllermanager.go:651] Started "disruption"
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:57.681322 62389 disruption.go:421] Sending events to api server.
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:57.681364 62389 disruption.go:432] Starting disruption controller
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:57.681373 62389 shared_informer.go:255] Waiting for caches to sync for disruption
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:57.682589 62389 shared_informer.go:262] Caches are synced for tokens
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:57.687586 62389 controllermanager.go:651] Started "persistentvolume-binder"
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:57.687816 62389 pv_controller_base.go:335] Starting persistent volume controller
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:57.687836 62389 shared_informer.go:255] Waiting for caches to sync for persistent volume
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:57.694545 62389 controllermanager.go:651] Started "persistentvolume-expander"
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:57.694644 62389 expand_controller.go:340] Starting expand controller
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:57.694664 62389 shared_informer.go:255] Waiting for caches to sync for expand
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller I1115 05:38:57.698878 62389 base_controller.go:73] Caches are synced for WebhookAuthenticatorCertApprover_csr-approver-controller
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller I1115 05:38:57.698901 62389 base_controller.go:110] Starting #1 worker of WebhookAuthenticatorCertApprover_csr-approver-controller controller ...
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:57.702140 62389 controllermanager.go:651] Started "endpointslice"
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:57.702470 62389 endpointslice_controller.go:261] Starting endpoint slice controller
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:57.702481 62389 shared_informer.go:255] Waiting for caches to sync for endpoint_slice
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller E1115 05:38:57.772051 62389 podsecurity_label_sync_controller.go:278] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller E1115 05:38:57.772087 62389 podsecurity_label_sync_controller.go:278] failed to determine whether namespace "openshift-infra" should be enqueued: namespace "openshift-infra" not found
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller E1115 05:38:57.772105 62389 podsecurity_label_sync_controller.go:278] failed to determine whether namespace "openshift-infra" should be enqueued: namespace "openshift-infra" not found
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller E1115 05:38:57.772116 62389 podsecurity_label_sync_controller.go:278] failed to determine whether namespace "openshift-infra" should be enqueued: namespace "openshift-infra" not found
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller E1115 05:38:57.772132 62389 podsecurity_label_sync_controller.go:278] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller E1115 05:38:57.772141 62389 podsecurity_label_sync_controller.go:278] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller E1115 05:38:57.772156 62389 podsecurity_label_sync_controller.go:278] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller E1115 05:38:57.772164 62389 podsecurity_label_sync_controller.go:278] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller E1115 05:38:57.772178 62389 podsecurity_label_sync_controller.go:278] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller E1115 05:38:57.772192 62389 podsecurity_label_sync_controller.go:278] failed to determine whether namespace "openshift-infra" should be enqueued: namespace "openshift-infra" not found
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller E1115 05:38:57.772207 62389 podsecurity_label_sync_controller.go:278] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller E1115 05:38:57.772216 62389 podsecurity_label_sync_controller.go:278] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller E1115 05:38:57.772230 62389 podsecurity_label_sync_controller.go:278] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller E1115 05:38:57.785245 62389 podsecurity_label_sync_controller.go:278] failed to determine whether namespace "kube-system" should be enqueued: namespace "kube-system" not found
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:57.884821 62389 garbagecollector.go:154] Starting garbage collector controller
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:57.884846 62389 shared_informer.go:255] Waiting for caches to sync for garbage collector
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:57.884824 62389 controllermanager.go:651] Started "garbagecollector"
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:57.884904 62389 graph_builder.go:291] GraphBuilder running
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: openshift-default-scc-manager I1115 05:38:57.977514 62389 scc.go:87] Applying scc api scc/0000_20_kube-apiserver-operator_00_scc-nonroot.yaml
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller I1115 05:38:57.984138 62389 base_controller.go:73] Caches are synced for namespace-security-allocation-controller
Nov 15 05:38:57 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller I1115 05:38:57.984162 62389 base_controller.go:110] Starting #1 worker of namespace-security-allocation-controller controller ...
Nov 15 05:38:58 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller I1115 05:38:58.011982 62389 base_controller.go:73] Caches are synced for pod-security-admission-label-synchronization-controller
Nov 15 05:38:58 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller I1115 05:38:58.012022 62389 base_controller.go:110] Starting #1 worker of pod-security-admission-label-synchronization-controller controller ...
Nov 15 05:38:58 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-controller-manager I1115 05:38:58.135933 62389 node_ipam_controller.go:91] Sending events to api server.
Nov 15 05:38:58 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver W1115 05:38:58.145885 62389 reflector.go:424] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Nov 15 05:38:58 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver E1115 05:38:58.145922 62389 reflector.go:140] github.com/openshift/client-go/user/informers/externalversions/factory.go:101: Failed to watch *v1.Group: failed to list *v1.Group: the server could not find the requested resource (get groups.user.openshift.io)
Nov 15 05:38:58 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller E1115 05:38:58.375873 62389 namespace_scc_allocation_controller.go:258] rangeallocations.security.internal.openshift.io "scc-uid" is forbidden: User "system:serviceaccount:openshift-infra:namespace-security-allocation-controller" cannot get resource "rangeallocations" in API group "security.internal.openshift.io" at the cluster scope
Nov 15 05:38:58 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: openshift-default-scc-manager I1115 05:38:58.377619 62389 scc.go:87] Applying scc api scc/0000_20_kube-apiserver-operator_00_scc-privileged.yaml
Nov 15 05:38:58 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: openshift-default-scc-manager I1115 05:38:58.778835 62389 scc.go:87] Applying scc api scc/0000_20_kube-apiserver-operator_00_scc-restricted-v2.yaml
Nov 15 05:38:59 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: openshift-default-scc-manager I1115 05:38:59.178215 62389 scc.go:87] Applying scc api scc/0000_20_kube-apiserver-operator_00_scc-restricted.yaml
Nov 15 05:38:59 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: openshift-default-scc-manager I1115 05:38:59.579624 62389 rbac.go:144] Applying rbac scc/0000_20_kube-apiserver-operator_00_cr-scc-anyuid.yaml
Nov 15 05:38:59 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: openshift-default-scc-manager I1115 05:38:59.585115 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7cd3feda5f dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:ClusterRoleCreated,Message:Created ClusterRole.rbac.authorization.k8s.io/system:openshift:scc:anyuid because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:38:59.585047135 +0000 UTC m=+32.064012141,LastTimestamp:2022-11-15 05:38:59.585047135 +0000 UTC m=+32.064012141,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}
Nov 15 05:38:59 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: openshift-default-scc-manager I1115 05:38:59.585141 62389 rbac.go:144] Applying rbac scc/0000_20_kube-apiserver-operator_00_cr-scc-hostaccess.yaml
Nov 15 05:38:59 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: openshift-default-scc-manager I1115 05:38:59.589584 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7cd44309f1 dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:ClusterRoleCreated,Message:Created ClusterRole.rbac.authorization.k8s.io/system:openshift:scc:hostaccess because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:38:59.589515761 +0000 UTC m=+32.068480769,LastTimestamp:2022-11-15 05:38:59.589515761 +0000 UTC m=+32.068480769,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}
Nov 15 05:38:59 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: openshift-default-scc-manager I1115 05:38:59.589611 62389 rbac.go:144] Applying rbac scc/0000_20_kube-apiserver-operator_00_cr-scc-hostmount-anyuid.yaml
Nov 15 05:38:59 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: openshift-default-scc-manager I1115 05:38:59.593947 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7cd485d385 dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:ClusterRoleCreated,Message:Created ClusterRole.rbac.authorization.k8s.io/system:openshift:scc:hostmount because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:38:59.593892741 +0000 UTC m=+32.072857746,LastTimestamp:2022-11-15 05:38:59.593892741 +0000 UTC m=+32.072857746,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}
Nov 15 05:38:59 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: openshift-default-scc-manager I1115 05:38:59.593969 62389 rbac.go:144] Applying rbac scc/0000_20_kube-apiserver-operator_00_cr-scc-hostnetwork-v2.yaml
Nov 15 05:38:59 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: openshift-default-scc-manager I1115 05:38:59.598973 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7cd4d292b1 dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:ClusterRoleCreated,Message:Created ClusterRole.rbac.authorization.k8s.io/system:openshift:scc:hostnetwork-v2 because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:38:59.598922417 +0000 UTC m=+32.077887423,LastTimestamp:2022-11-15 05:38:59.598922417 +0000 UTC m=+32.077887423,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}
Nov 15 05:38:59 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: openshift-default-scc-manager I1115 05:38:59.598997 62389 rbac.go:144] Applying rbac scc/0000_20_kube-apiserver-operator_00_cr-scc-hostnetwork.yaml
Nov 15 05:38:59 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: openshift-default-scc-manager I1115 05:38:59.604019 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7cd51f833a dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:ClusterRoleCreated,Message:Created ClusterRole.rbac.authorization.k8s.io/system:openshift:scc:hostnetwork because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:38:59.60396473 +0000 UTC m=+32.082929711,LastTimestamp:2022-11-15 05:38:59.60396473 +0000 UTC m=+32.082929711,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}
Nov 15 05:38:59 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: openshift-default-scc-manager I1115 05:38:59.604043 62389 rbac.go:144] Applying rbac scc/0000_20_kube-apiserver-operator_00_cr-scc-nonroot-v2.yaml
Nov 15 05:38:59 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: openshift-default-scc-manager I1115 05:38:59.983922 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7cebc44509 dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:ClusterRoleCreated,Message:Created ClusterRole.rbac.authorization.k8s.io/system:openshift:scc:nonroot-v2 because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:38:59.983861001 +0000 UTC m=+32.462826011,LastTimestamp:2022-11-15 05:38:59.983861001 +0000 UTC m=+32.462826011,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}
Nov 15 05:38:59 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: openshift-default-scc-manager I1115 05:38:59.983950 62389 rbac.go:144] Applying rbac scc/0000_20_kube-apiserver-operator_00_cr-scc-nonroot.yaml
Nov 15 05:39:00 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: openshift-default-scc-manager I1115 05:39:00.383745 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7d03991937 dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:ClusterRoleCreated,Message:Created ClusterRole.rbac.authorization.k8s.io/system:openshift:scc:nonroot because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:39:00.383684919 +0000 UTC m=+32.862649925,LastTimestamp:2022-11-15 05:39:00.383684919 +0000 UTC m=+32.862649925,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}
Nov 15 05:39:00 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: openshift-default-scc-manager I1115 05:39:00.383775 62389 rbac.go:144] Applying rbac scc/0000_20_kube-apiserver-operator_00_cr-scc-privileged.yaml
Nov 15 05:39:00 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: openshift-default-scc-manager I1115 05:39:00.783973 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7d1b742b9f dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:ClusterRoleCreated,Message:Created ClusterRole.rbac.authorization.k8s.io/system:openshift:scc:privileged because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:39:00.783917983 +0000 UTC m=+33.262882989,LastTimestamp:2022-11-15 05:39:00.783917983 +0000 UTC m=+33.262882989,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}
Nov 15 05:39:00 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: openshift-default-scc-manager I1115 05:39:00.784003 62389 rbac.go:144] Applying rbac scc/0000_20_kube-apiserver-operator_00_cr-scc-restricted-v2.yaml
Nov 15 05:39:01 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: openshift-default-scc-manager I1115 05:39:01.184300 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7d3350790f dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:ClusterRoleCreated,Message:Created ClusterRole.rbac.authorization.k8s.io/system:openshift:scc:restricted-v2 because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:39:01.184231695 +0000 UTC m=+33.663196716,LastTimestamp:2022-11-15 05:39:01.184231695 +0000 UTC m=+33.663196716,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}
Nov 15 05:39:01 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: openshift-default-scc-manager I1115 05:39:01.184331 62389 rbac.go:144] Applying rbac scc/0000_20_kube-apiserver-operator_00_cr-scc-restricted.yaml
Nov 15 05:39:01 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: openshift-default-scc-manager I1115 05:39:01.583875 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7d4b21ba28 dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:ClusterRoleCreated,Message:Created ClusterRole.rbac.authorization.k8s.io/system:openshift:scc:restricted because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:39:01.583821352 +0000 UTC m=+34.062786357,LastTimestamp:2022-11-15 05:39:01.583821352 +0000 UTC m=+34.062786357,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}
Nov 15 05:39:01 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: openshift-default-scc-manager I1115 05:39:01.584690 62389 rbac.go:144] Applying rbac scc/0000_20_kube-apiserver-operator_00_crb-systemauthenticated-scc-restricted-v2.yaml
Nov 15 05:39:01 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: openshift-default-scc-manager I1115 05:39:01.589556 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7d4b782a2a dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:ClusterRoleBindingCreated,Message:Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:scc:restricted-v2 because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:39:01.589486122 +0000 UTC m=+34.068451127,LastTimestamp:2022-11-15 05:39:01.589486122 +0000 UTC m=+34.068451127,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,}
Nov 15 05:39:01 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: openshift-default-scc-manager I1115 05:39:01.589583 62389 openshift-default-scc-manager.go:50] openshift-default-scc-manager applied default SCCs
Nov 15 05:39:01 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: openshift-default-scc-manager I1115 05:39:01.589592 62389 manager.go:119] openshift-default-scc-manager completed
Nov 15 05:39:01 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: microshift-mdns-controller I1115 05:39:01.589618 62389 manager.go:114] Starting microshift-mdns-controller
Nov 15 05:39:01 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: microshift-mdns-controller I1115 05:39:01.589775 62389 controller.go:67] mDNS: Starting server on interface "lo", NodeIP "10.0.0.2", NodeName "release-ci-ci-op-k5cwk1pv-7cb14"
Nov 15 05:39:01 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: microshift-mdns-controller I1115 05:39:01.590271 62389 controller.go:67] mDNS: Starting server on interface "eth0", NodeIP "10.0.0.2", NodeName "release-ci-ci-op-k5cwk1pv-7cb14"
Nov 15 05:39:01 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: microshift-mdns-controller I1115 05:39:01.590354 62389 controller.go:67] mDNS: Starting server on interface "br-ex", NodeIP "10.0.0.2", NodeName "release-ci-ci-op-k5cwk1pv-7cb14"
Nov 15 05:39:01 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: microshift-mdns-controller I1115 05:39:01.590530 62389 routes.go:30] Starting MicroShift mDNS route watcher
Nov 15 05:39:01 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: microshift-mdns-controller I1115 05:39:01.591216 62389 routes.go:73] mDNS: waiting for route API to be ready
Nov 15 05:39:01 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: microshift-mdns-controller I1115 05:39:01.592708 62389 routes.go:87] mDNS: Route API ready, watching routers
Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: route-controller-manager I1115 05:39:02.567257 62389 openshift-route-controller-manager.go:105] route-controller-manager is ready
Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: version-manager I1115 05:39:02.567421 62389 manager.go:114] Starting version-manager
Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kubelet I1115 05:39:02.567443 62389 manager.go:114] Starting kubelet
Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.567609 62389 manager.go:114] Starting infrastructure-services-manager
Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kustomizer I1115 05:39:02.567430 62389 manager.go:114] Starting kustomizer
Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kustomizer I1115 05:39:02.568403 62389 apply.go:64] No kustomization found at /usr/lib/microshift/manifests/kustomization.yaml
Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kustomizer I1115 05:39:02.568421 62389 apply.go:64] No kustomization found at /etc/microshift/manifests/kustomization.yaml
Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kustomizer I1115 05:39:02.568431 62389 manager.go:119] kustomizer completed
Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.568853 62389 rbac.go:144] Applying rbac core/csr_approver_clusterrole.yaml
Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kubelet I1115 05:39:02.572158 62389 server.go:413] "Kubelet version" kubeletVersion="v1.24.0"
Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kubelet I1115 05:39:02.572176 62389 server.go:415] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kubelet W1115 05:39:02.572231 62389 feature_gate.go:238] Setting GA feature gate PodSecurity=true. It will be removed in a future release.
Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kubelet W1115 05:39:02.572315 62389 feature_gate.go:238] Setting GA feature gate PodSecurity=true. It will be removed in a future release. Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: version-manager I1115 05:39:02.573241 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7d861a523a dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:ConfigMapCreated,Message:Created ConfigMap/microshift-version -n kube-public because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:39:02.573191738 +0000 UTC m=+35.052156742,LastTimestamp:2022-11-15 05:39:02.573191738 +0000 UTC m=+35.052156742,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: version-manager I1115 05:39:02.573259 62389 manager.go:119] version-manager completed Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kubelet I1115 05:39:02.574428 62389 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/microshift/certs/ca-bundle/kubelet-ca.crt" Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.574531 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7d862dbcfd dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:ClusterRoleCreated,Message:Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:cluster-csr-approver-controller because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:39:02.574464253 +0000 UTC 
m=+35.053429270,LastTimestamp:2022-11-15 05:39:02.574464253 +0000 UTC m=+35.053429270,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.574556 62389 rbac.go:144] Applying rbac core/namespace-security-allocation-controller-clusterrole.yaml Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.578999 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7d8671f438 dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:ClusterRoleCreated,Message:Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:namespace-security-allocation-controller because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:39:02.57893484 +0000 UTC m=+35.057899857,LastTimestamp:2022-11-15 05:39:02.57893484 +0000 UTC m=+35.057899857,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.579029 62389 rbac.go:144] Applying rbac core/podsecurity-admission-label-syncer-controller-clusterrole.yaml Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kubelet I1115 05:39:02.581482 62389 server.go:660] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kubelet I1115 05:39:02.581829 62389 container_manager_linux.go:262] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kubelet I1115 05:39:02.581906 62389 container_manager_linux.go:267] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName:/system.slice/crio.service SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none} Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kubelet I1115 05:39:02.581926 62389 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kubelet I1115 05:39:02.581937 62389 
container_manager_linux.go:302] "Creating device plugin manager" devicePluginEnabled=true Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kubelet I1115 05:39:02.582020 62389 state_mem.go:36] "Initialized new in-memory state store" Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.585384 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7d86d396a2 dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:ClusterRoleCreated,Message:Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:podsecurity-admission-label-syncer-controller because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:39:02.58533341 +0000 UTC m=+35.064298416,LastTimestamp:2022-11-15 05:39:02.58533341 +0000 UTC m=+35.064298416,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kubelet I1115 05:39:02.586124 62389 kubelet.go:393] "Attempting to sync node with API server" Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kubelet I1115 05:39:02.586157 62389 kubelet.go:293] "Adding apiserver pod source" Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.586165 62389 rbac.go:144] Applying rbac core/csr_approver_clusterrolebinding.yaml Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kubelet I1115 05:39:02.586177 62389 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kubelet I1115 05:39:02.587085 62389 kuberuntime_manager.go:240] "Container runtime initialized" containerRuntime="cri-o" 
version="1.25.1-2.rhaos4.12.gitafa0c57.el8" apiVersion="v1" Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kubelet W1115 05:39:02.590254 62389 probe.go:268] Flexvolume plugin directory at /var/lib/microshift/kubelet-plugins/volume/exec does not exist. Recreating. Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kubelet I1115 05:39:02.591125 62389 server.go:1175] "Started kubelet" Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kubelet E1115 05:39:02.591141 62389 kubelet.go:1333] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache" Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kubelet I1115 05:39:02.591483 62389 server.go:155] "Starting to listen" address="0.0.0.0" port=10250 Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kubelet I1115 05:39:02.592520 62389 server.go:438] "Adding debug handlers to kubelet server" Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kubelet I1115 05:39:02.592834 62389 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kubelet E1115 05:39:02.594463 62389 kubelet.go:2396] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" 
Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.595432 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7d876cab06 dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:ClusterRoleBindingCreated,Message:Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:cluster-csr-approver-controller because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:39:02.595365638 +0000 UTC m=+35.074330653,LastTimestamp:2022-11-15 05:39:02.595365638 +0000 UTC m=+35.074330653,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.595455 62389 rbac.go:144] Applying rbac core/namespace-security-allocation-controller-clusterrolebinding.yaml Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kubelet I1115 05:39:02.596254 62389 volume_manager.go:293] "Starting Kubelet Volume Manager" Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kubelet I1115 05:39:02.596471 62389 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kubelet E1115 05:39:02.601075 62389 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"release-ci-ci-op-k5cwk1pv-7cb14\" not found" node="release-ci-ci-op-k5cwk1pv-7cb14" Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.602442 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7d87d7be97 dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:ClusterRoleBindingCreated,Message:Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:namespace-security-allocation-controller because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:39:02.602382999 +0000 UTC m=+35.081348014,LastTimestamp:2022-11-15 05:39:02.602382999 +0000 UTC m=+35.081348014,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.602459 62389 rbac.go:144] Applying rbac core/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.607083 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7d881ec030 dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:ClusterRoleBindingCreated,Message:Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:podsecurity-admission-label-syncer-controller because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:39:02.607036464 +0000 UTC m=+35.086001473,LastTimestamp:2022-11-15 05:39:02.607036464 +0000 UTC m=+35.086001473,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.608110 62389 scheduling.go:77] Applying PriorityClass CR core/priority-class-openshift-user-critical.yaml Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 
microshift[62389]: infrastructure-services-manager I1115 05:39:02.614709 62389 core.go:170] Applying corev1 api components/service-ca/ns.yaml Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.619884 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7d88e1e018 dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NamespaceCreated,Message:Created Namespace/openshift-service-ca because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:39:02.619824152 +0000 UTC m=+35.098789160,LastTimestamp:2022-11-15 05:39:02.619824152 +0000 UTC m=+35.098789160,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.620661 62389 rbac.go:144] Applying rbac components/service-ca/clusterrolebinding.yaml Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.624359 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7d89263cf5 dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:ClusterRoleBindingCreated,Message:Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:service-ca because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:39:02.624304373 +0000 UTC m=+35.103269378,LastTimestamp:2022-11-15 05:39:02.624304373 +0000 UTC m=+35.103269378,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Nov 15 05:39:02 
release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller I1115 05:39:02.624361 62389 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve clusterrole from role ref: clusterrole.rbac.authorization.k8s.io "system:openshift:controller:service-ca" not found Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.625124 62389 rbac.go:144] Applying rbac components/service-ca/clusterrole.yaml Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.629694 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7d8977ab67 dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:ClusterRoleCreated,Message:Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:service-ca because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:39:02.629641063 +0000 UTC m=+35.108606355,LastTimestamp:2022-11-15 05:39:02.629641063 +0000 UTC m=+35.108606355,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.630434 62389 rbac.go:144] Applying rbac components/service-ca/rolebinding.yaml Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: cluster-policy-controller I1115 05:39:02.634720 62389 sccrolecache.go:460] failed to retrieve a role for a rolebinding ref: couldn't retrieve role from role ref: role.rbac.authorization.k8s.io "system:openshift:controller:service-ca" not found Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.634786 62389 recorder_logging.go:44] 
&Event{ObjectMeta:{dummy.1727ab7d89c55e9d dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:RoleBindingCreated,Message:Created RoleBinding.rbac.authorization.k8s.io/system:openshift:controller:service-ca -n openshift-service-ca because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:39:02.634733213 +0000 UTC m=+35.113698218,LastTimestamp:2022-11-15 05:39:02.634733213 +0000 UTC m=+35.113698218,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.635597 62389 rbac.go:144] Applying rbac components/service-ca/role.yaml Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.643404 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7d8a48f03f dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:RoleCreated,Message:Created Role.rbac.authorization.k8s.io/system:openshift:controller:service-ca -n openshift-service-ca because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:39:02.643355711 +0000 UTC m=+35.122320749,LastTimestamp:2022-11-15 05:39:02.643355711 +0000 UTC m=+35.122320749,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.644151 62389 core.go:170] Applying corev1 api components/service-ca/sa.yaml Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: 
infrastructure-services-manager I1115 05:39:02.647996 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7d8a8efee3 dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:ServiceAccountCreated,Message:Created ServiceAccount/service-ca -n openshift-service-ca because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:39:02.647946979 +0000 UTC m=+35.126911991,LastTimestamp:2022-11-15 05:39:02.647946979 +0000 UTC m=+35.126911991,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.657326 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7d8b1d4e72 dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:SecretCreated,Message:Created Secret/signing-key -n openshift-service-ca because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:39:02.657273458 +0000 UTC m=+35.136238465,LastTimestamp:2022-11-15 05:39:02.657273458 +0000 UTC m=+35.136238465,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.663165 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7d8b762124 dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:ConfigMapCreated,Message:Created ConfigMap/signing-cabundle -n openshift-service-ca because it was 
missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:39:02.663094564 +0000 UTC m=+35.142059583,LastTimestamp:2022-11-15 05:39:02.663094564 +0000 UTC m=+35.142059583,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.664350 62389 apps.go:94] Applying apps api components/service-ca/deployment.yaml Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:39:02.674429 62389 controller.go:616] quota admission added evaluator for: deployments.apps Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.676531 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7d8c41f87b dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:DeploymentCreated,Message:Created Deployment.apps/service-ca -n openshift-service-ca because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:39:02.676453499 +0000 UTC m=+35.155418508,LastTimestamp:2022-11-15 05:39:02.676453499 +0000 UTC m=+35.155418508,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.676578 62389 lvmd.go:62] lvmd file not found, assuming default values Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.677901 62389 storage.go:69] Applying sc components/odf-lvm/topolvm_default-storage-class.yaml Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 
05:39:02.683098 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7d8ca677f3 dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:StorageClassCreated,Message:Created StorageClass.storage.k8s.io/topolvm-provisioner because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:39:02.683039731 +0000 UTC m=+35.162004740,LastTimestamp:2022-11-15 05:39:02.683039731 +0000 UTC m=+35.162004740,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.683876 62389 storage.go:126] Applying csiDriver components/odf-lvm/csi-driver.yaml Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.688864 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7d8cfe9653 dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:CSIDriverCreated,Message:Created CSIDriver.storage.k8s.io/topolvm.cybozu.com because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:39:02.688814675 +0000 UTC m=+35.167779755,LastTimestamp:2022-11-15 05:39:02.688814675 +0000 UTC m=+35.167779755,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kubelet I1115 05:39:02.689035 62389 cpu_manager.go:214] "Starting CPU manager" policy="none" Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kubelet I1115 05:39:02.689052 62389 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Nov 
15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kubelet I1115 05:39:02.689072 62389 state_mem.go:36] "Initialized new in-memory state store" Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.689532 62389 core.go:170] Applying corev1 api components/odf-lvm/topolvm-openshift-storage_namespace.yaml Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kubelet I1115 05:39:02.690664 62389 policy_none.go:49] "None policy: Start" Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kubelet I1115 05:39:02.691305 62389 memory_manager.go:168] "Starting memorymanager" policy="None" Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kubelet I1115 05:39:02.691336 62389 state_mem.go:35] "Initializing new in-memory state store" Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.695164 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7d8d5e93fc dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NamespaceCreated,Message:Created Namespace/openshift-storage because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:39:02.695105532 +0000 UTC m=+35.174070543,LastTimestamp:2022-11-15 05:39:02.695105532 +0000 UTC m=+35.174070543,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.696433 62389 core.go:170] Applying corev1 api components/odf-lvm/topolvm-node_v1_serviceaccount.yaml Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kubelet E1115 05:39:02.696647 62389 kubelet.go:2471] "Error getting node" err="node 
\"release-ci-ci-op-k5cwk1pv-7cb14\" not found" Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kubelet I1115 05:39:02.697307 62389 kubelet_node_status.go:72] "Attempting to register node" node="release-ci-ci-op-k5cwk1pv-7cb14" Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kubelet I1115 05:39:02.700734 62389 kubelet_node_status.go:75] "Successfully registered node" node="release-ci-ci-op-k5cwk1pv-7cb14" Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.701726 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7d8dc29638 dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:ServiceAccountCreated,Message:Created ServiceAccount/topolvm-node -n openshift-storage because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:39:02.701659704 +0000 UTC m=+35.180624713,LastTimestamp:2022-11-15 05:39:02.701659704 +0000 UTC m=+35.180624713,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.701750 62389 core.go:170] Applying corev1 api components/odf-lvm/topolvm-controller_v1_serviceaccount.yaml Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.707608 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7d8e1c558f dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:ServiceAccountCreated,Message:Created ServiceAccount/topolvm-controller -n openshift-storage because it was 
missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:39:02.707541391 +0000 UTC m=+35.186506400,LastTimestamp:2022-11-15 05:39:02.707541391 +0000 UTC m=+35.186506400,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.708945 62389 rbac.go:144] Applying rbac components/odf-lvm/topolvm-controller_rbac.authorization.k8s.io_v1_role.yaml Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.713352 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7d8e744137 dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:RoleCreated,Message:Created Role.rbac.authorization.k8s.io/topolvm-controller -n openshift-storage because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:39:02.713303351 +0000 UTC m=+35.192268357,LastTimestamp:2022-11-15 05:39:02.713303351 +0000 UTC m=+35.192268357,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.713374 62389 rbac.go:144] Applying rbac components/odf-lvm/topolvm-csi-provisioner_rbac.authorization.k8s.io_v1_role.yaml Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kubelet I1115 05:39:02.715902 62389 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv4 Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.717831 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7d8eb89757 dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:RoleCreated,Message:Created Role.rbac.authorization.k8s.io/topolvm-csi-provisioner -n openshift-storage because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:39:02.717781847 +0000 UTC m=+35.196746843,LastTimestamp:2022-11-15 05:39:02.717781847 +0000 UTC m=+35.196746843,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.717847 62389 rbac.go:144] Applying rbac components/odf-lvm/topolvm-csi-resizer_rbac.authorization.k8s.io_v1_role.yaml Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.724186 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7d8f1970ae dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:RoleCreated,Message:Created Role.rbac.authorization.k8s.io/topolvm-csi-resizer -n openshift-storage because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:39:02.724128942 +0000 UTC m=+35.203093954,LastTimestamp:2022-11-15 05:39:02.724128942 +0000 UTC m=+35.203093954,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.725099 62389 
rbac.go:144] Applying rbac components/odf-lvm/topolvm-controller_rbac.authorization.k8s.io_v1_rolebinding.yaml Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.729383 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7d8f68e58b dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:RoleBindingCreated,Message:Created RoleBinding.rbac.authorization.k8s.io/topolvm-controller -n openshift-storage because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:39:02.729336203 +0000 UTC m=+35.208301208,LastTimestamp:2022-11-15 05:39:02.729336203 +0000 UTC m=+35.208301208,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.729404 62389 rbac.go:144] Applying rbac components/odf-lvm/topolvm-csi-provisioner_rbac.authorization.k8s.io_v1_rolebinding.yaml Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.733607 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7d8fa924cd dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:RoleBindingCreated,Message:Created RoleBinding.rbac.authorization.k8s.io/topolvm-csi-provisioner -n openshift-storage because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:39:02.733546701 +0000 UTC m=+35.212511716,LastTimestamp:2022-11-15 05:39:02.733546701 +0000 UTC m=+35.212511716,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.733633 62389 rbac.go:144] Applying rbac components/odf-lvm/topolvm-csi-resizer_rbac.authorization.k8s.io_v1_rolebinding.yaml Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.738162 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7d8feeb57f dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:RoleBindingCreated,Message:Created RoleBinding.rbac.authorization.k8s.io/topolvm-csi-resizer -n openshift-storage because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:39:02.738105727 +0000 UTC m=+35.217070741,LastTimestamp:2022-11-15 05:39:02.738105727 +0000 UTC m=+35.217070741,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.739164 62389 rbac.go:144] Applying rbac components/odf-lvm/topolvm-csi-provisioner_rbac.authorization.k8s.io_v1_clusterrole.yaml Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.744104 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7d90497c40 dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:ClusterRoleCreated,Message:Created ClusterRole.rbac.authorization.k8s.io/topolvm-csi-provisioner because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:39:02.744054848 +0000 UTC 
m=+35.223019849,LastTimestamp:2022-11-15 05:39:02.744054848 +0000 UTC m=+35.223019849,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.744124 62389 rbac.go:144] Applying rbac components/odf-lvm/topolvm-controller_rbac.authorization.k8s.io_v1_clusterrole.yaml Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.749470 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7d909b587f dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:ClusterRoleCreated,Message:Created ClusterRole.rbac.authorization.k8s.io/topolvm-controller because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:39:02.749419647 +0000 UTC m=+35.228384629,LastTimestamp:2022-11-15 05:39:02.749419647 +0000 UTC m=+35.228384629,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.749508 62389 rbac.go:144] Applying rbac components/odf-lvm/topolvm-csi-resizer_rbac.authorization.k8s.io_v1_clusterrole.yaml Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.753889 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7d90ded405 dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:ClusterRoleCreated,Message:Created ClusterRole.rbac.authorization.k8s.io/topolvm-csi-resizer because it was 
missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:39:02.753842181 +0000 UTC m=+35.232807185,LastTimestamp:2022-11-15 05:39:02.753842181 +0000 UTC m=+35.232807185,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.753913 62389 rbac.go:144] Applying rbac components/odf-lvm/topolvm-node-scc_rbac.authorization.k8s.io_v1_clusterrole.yaml Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kubelet I1115 05:39:02.760138 62389 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kubelet I1115 05:39:02.760163 62389 status_manager.go:161] "Starting to sync pod status with apiserver" Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kubelet I1115 05:39:02.760181 62389 kubelet.go:2033] "Starting kubelet main sync loop" Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.760201 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7d913eeb05 dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:ClusterRoleCreated,Message:Created ClusterRole.rbac.authorization.k8s.io/topolvm-node-scc because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:39:02.760139525 +0000 UTC m=+35.239104541,LastTimestamp:2022-11-15 05:39:02.760139525 +0000 UTC m=+35.239104541,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.760219 62389 rbac.go:144] 
Applying rbac components/odf-lvm/topolvm-node_rbac.authorization.k8s.io_v1_clusterrole.yaml Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kubelet E1115 05:39:02.760224 62389 kubelet.go:2057] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.765203 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7d918b769d dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:ClusterRoleCreated,Message:Created ClusterRole.rbac.authorization.k8s.io/topolvm-node because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:39:02.765155997 +0000 UTC m=+35.244121134,LastTimestamp:2022-11-15 05:39:02.765155997 +0000 UTC m=+35.244121134,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.765978 62389 rbac.go:144] Applying rbac components/odf-lvm/topolvm-controller_rbac.authorization.k8s.io_v1_clusterrolebinding.yaml Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.769937 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7d91d3b123 dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:ClusterRoleBindingCreated,Message:Created ClusterRoleBinding.rbac.authorization.k8s.io/topolvm-controller because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:39:02.769889571 +0000 
UTC m=+35.248854581,LastTimestamp:2022-11-15 05:39:02.769889571 +0000 UTC m=+35.248854581,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.769977 62389 rbac.go:144] Applying rbac components/odf-lvm/topolvm-csi-provisioner_rbac.authorization.k8s.io_v1_clusterrolebinding.yaml Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.773616 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7d920bcf82 dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:ClusterRoleBindingCreated,Message:Created ClusterRoleBinding.rbac.authorization.k8s.io/topolvm-csi-provisioner because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:39:02.773567362 +0000 UTC m=+35.252532372,LastTimestamp:2022-11-15 05:39:02.773567362 +0000 UTC m=+35.252532372,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.773637 62389 rbac.go:144] Applying rbac components/odf-lvm/topolvm-csi-resizer_rbac.authorization.k8s.io_v1_clusterrolebinding.yaml Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.777304 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7d924422b2 dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:ClusterRoleBindingCreated,Message:Created 
ClusterRoleBinding.rbac.authorization.k8s.io/topolvm-csi-resizer because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:39:02.777258674 +0000 UTC m=+35.256223678,LastTimestamp:2022-11-15 05:39:02.777258674 +0000 UTC m=+35.256223678,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.777321 62389 rbac.go:144] Applying rbac components/odf-lvm/topolvm-node-scc_rbac.authorization.k8s.io_v1_clusterrolebinding.yaml Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.780806 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7d92796033 dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:ClusterRoleBindingCreated,Message:Created ClusterRoleBinding.rbac.authorization.k8s.io/topolvm-node-scc because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:39:02.780747827 +0000 UTC m=+35.259712837,LastTimestamp:2022-11-15 05:39:02.780747827 +0000 UTC m=+35.259712837,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.780834 62389 rbac.go:144] Applying rbac components/odf-lvm/topolvm-node_rbac.authorization.k8s.io_v1_clusterrolebinding.yaml Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.784862 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7d92b77d2b dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:ClusterRoleBindingCreated,Message:Created ClusterRoleBinding.rbac.authorization.k8s.io/topolvm-node because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:39:02.784818475 +0000 UTC m=+35.263783490,LastTimestamp:2022-11-15 05:39:02.784818475 +0000 UTC m=+35.263783490,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.785564 62389 core.go:170] Applying corev1 api components/odf-lvm/topolvm-lvmd-config_configmap_v1.yaml Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.789832 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7d93033e61 dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:ConfigMapCreated,Message:Created ConfigMap/lvmd -n openshift-storage because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:39:02.789783137 +0000 UTC m=+35.268748142,LastTimestamp:2022-11-15 05:39:02.789783137 +0000 UTC m=+35.268748142,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.790516 62389 apps.go:94] Applying apps api components/odf-lvm/topolvm-controller_deployment.yaml Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.797554 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7d9378cc2a dummy 0 0001-01-01 00:00:00 +0000 
UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:DeploymentCreated,Message:Created Deployment.apps/topolvm-controller -n openshift-storage because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:39:02.797487146 +0000 UTC m=+35.276452150,LastTimestamp:2022-11-15 05:39:02.797487146 +0000 UTC m=+35.276452150,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.798246 62389 apps.go:94] Applying apps api components/odf-lvm/topolvm-node_daemonset.yaml Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:39:02.804360 62389 controller.go:616] quota admission added evaluator for: daemonsets.apps Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.806750 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7d940557cb dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:DaemonSetCreated,Message:Created DaemonSet.apps/topolvm-node -n openshift-storage because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:39:02.806697931 +0000 UTC m=+35.285662916,LastTimestamp:2022-11-15 05:39:02.806697931 +0000 UTC m=+35.285662916,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.807556 62389 scc.go:87] Applying scc api components/odf-lvm/topolvm-node-securitycontextconstraint.yaml Nov 15 05:39:02 
release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.815035 62389 core.go:170] Applying corev1 api components/openshift-router/namespace.yaml Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.824128 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7d950e7f4e dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NamespaceCreated,Message:Created Namespace/openshift-ingress because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:39:02.824075086 +0000 UTC m=+35.303040087,LastTimestamp:2022-11-15 05:39:02.824075086 +0000 UTC m=+35.303040087,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.825073 62389 rbac.go:144] Applying rbac components/openshift-router/cluster-role.yaml Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.829919 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7d9566d999 dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:ClusterRoleCreated,Message:Created ClusterRole.rbac.authorization.k8s.io/openshift-ingress-router because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:39:02.829865369 +0000 UTC m=+35.308830374,LastTimestamp:2022-11-15 05:39:02.829865369 +0000 UTC m=+35.308830374,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Nov 15 05:39:02 
release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.829947 62389 rbac.go:144] Applying rbac components/openshift-router/ingress-to-route-controller-clusterrole.yaml Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.834897 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7d95b2ac43 dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:ClusterRoleCreated,Message:Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:ingress-to-route-controller because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:39:02.834834499 +0000 UTC m=+35.313799513,LastTimestamp:2022-11-15 05:39:02.834834499 +0000 UTC m=+35.313799513,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.835684 62389 rbac.go:144] Applying rbac components/openshift-router/cluster-role-binding.yaml Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.839802 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7d95fdaaea dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:ClusterRoleBindingCreated,Message:Created ClusterRoleBinding.rbac.authorization.k8s.io/openshift-ingress-router because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:39:02.839749354 +0000 UTC m=+35.318714364,LastTimestamp:2022-11-15 05:39:02.839749354 +0000 UTC 
m=+35.318714364,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.839832 62389 rbac.go:144] Applying rbac components/openshift-router/ingress-to-route-controller-clusterrolebinding.yaml Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.843766 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7d963a1eff dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:ClusterRoleBindingCreated,Message:Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:ingress-to-route-controller because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:39:02.843711231 +0000 UTC m=+35.322676640,LastTimestamp:2022-11-15 05:39:02.843711231 +0000 UTC m=+35.322676640,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.844450 62389 core.go:170] Applying corev1 api components/openshift-router/service-account.yaml Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.848699 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7d96851ce4 dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:ServiceAccountCreated,Message:Created ServiceAccount/router -n openshift-ingress because it was 
missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:39:02.848625892 +0000 UTC m=+35.327590904,LastTimestamp:2022-11-15 05:39:02.848625892 +0000 UTC m=+35.327590904,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.849471 62389 core.go:170] Applying corev1 api components/openshift-router/configmap.yaml Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.853821 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7d96d39bf3 dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:ConfigMapCreated,Message:Created ConfigMap/service-ca-bundle -n openshift-ingress because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:39:02.853770227 +0000 UTC m=+35.332735251,LastTimestamp:2022-11-15 05:39:02.853770227 +0000 UTC m=+35.332735251,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.854626 62389 core.go:170] Applying corev1 api components/openshift-router/service-internal.yaml Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kubelet E1115 05:39:02.860761 62389 kubelet.go:2057] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:39:02.862034 62389 alloc.go:327] "allocated clusterIPs" service="openshift-ingress/router-internal-default" clusterIPs=map[IPv4:10.43.73.144] Nov 15 05:39:02 
release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.862688 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7d975aeb11 dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:ServiceCreated,Message:Created Service/router-internal-default -n openshift-ingress because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:39:02.862637841 +0000 UTC m=+35.341602848,LastTimestamp:2022-11-15 05:39:02.862637841 +0000 UTC m=+35.341602848,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.867891 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7d97aa4792 dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:SecretCreated,Message:Created Secret/router-certs-default -n openshift-ingress because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:39:02.867838866 +0000 UTC m=+35.346803877,LastTimestamp:2022-11-15 05:39:02.867838866 +0000 UTC m=+35.346803877,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.868696 62389 apps.go:94] Applying apps api components/openshift-router/deployment.yaml Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.875612 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7d98201219 dummy 0 0001-01-01 00:00:00 +0000 
UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:DeploymentCreated,Message:Created Deployment.apps/router-default -n openshift-ingress because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:39:02.875558425 +0000 UTC m=+35.354523438,LastTimestamp:2022-11-15 05:39:02.875558425 +0000 UTC m=+35.354523438,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.876350 62389 core.go:170] Applying corev1 api components/openshift-dns/dns/namespace.yaml Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.882334 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7d98868621 dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NamespaceCreated,Message:Created Namespace/openshift-dns because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:39:02.882272801 +0000 UTC m=+35.361237811,LastTimestamp:2022-11-15 05:39:02.882272801 +0000 UTC m=+35.361237811,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.883064 62389 core.go:170] Applying corev1 api components/openshift-dns/dns/service.yaml Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kube-apiserver I1115 05:39:02.889110 62389 alloc.go:327] "allocated clusterIPs" service="openshift-dns/dns-default" clusterIPs=map[IPv4:10.43.0.10] Nov 15 05:39:02 
release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.889750 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7d98f7e05f dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:ServiceCreated,Message:Created Service/dns-default -n openshift-dns because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:39:02.889701471 +0000 UTC m=+35.368666476,LastTimestamp:2022-11-15 05:39:02.889701471 +0000 UTC m=+35.368666476,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.890476 62389 rbac.go:144] Applying rbac components/openshift-dns/dns/cluster-role.yaml Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.894669 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7d9942ed56 dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:ClusterRoleCreated,Message:Created ClusterRole.rbac.authorization.k8s.io/openshift-dns because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:39:02.89461999 +0000 UTC m=+35.373584995,LastTimestamp:2022-11-15 05:39:02.89461999 +0000 UTC m=+35.373584995,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.895401 62389 rbac.go:144] Applying rbac components/openshift-dns/dns/cluster-role-binding.yaml Nov 15 05:39:02 
release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.899145 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7d99870434 dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:ClusterRoleBindingCreated,Message:Created ClusterRoleBinding.rbac.authorization.k8s.io/openshift-dns because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:39:02.899082292 +0000 UTC m=+35.378047297,LastTimestamp:2022-11-15 05:39:02.899082292 +0000 UTC m=+35.378047297,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.899970 62389 core.go:170] Applying corev1 api components/openshift-dns/dns/service-account.yaml Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.904103 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7d99d29e48 dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:ServiceAccountCreated,Message:Created ServiceAccount/dns -n openshift-dns because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:39:02.904036936 +0000 UTC m=+35.383001941,LastTimestamp:2022-11-15 05:39:02.904036936 +0000 UTC m=+35.383001941,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.904130 62389 core.go:170] Applying corev1 api 
components/openshift-dns/node-resolver/service-account.yaml Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.908227 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7d9a11c3cc dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:ServiceAccountCreated,Message:Created ServiceAccount/node-resolver -n openshift-dns because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:39:02.908175308 +0000 UTC m=+35.387140289,LastTimestamp:2022-11-15 05:39:02.908175308 +0000 UTC m=+35.387140289,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.908946 62389 core.go:170] Applying corev1 api components/openshift-dns/dns/configmap.yaml Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.913103 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7d9a5befe2 dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:ConfigMapCreated,Message:Created ConfigMap/dns-default -n openshift-dns because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:39:02.913036258 +0000 UTC m=+35.392001274,LastTimestamp:2022-11-15 05:39:02.913036258 +0000 UTC m=+35.392001274,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.913945 62389 apps.go:94] Applying apps api 
components/openshift-dns/dns/daemonset.yaml Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.920526 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7d9acd1a9b dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:DaemonSetCreated,Message:Created DaemonSet.apps/dns-default -n openshift-dns because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:39:02.920452763 +0000 UTC m=+35.399417774,LastTimestamp:2022-11-15 05:39:02.920452763 +0000 UTC m=+35.399417774,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.920551 62389 apps.go:94] Applying apps api components/openshift-dns/node-resolver/daemonset.yaml Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.926318 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7d9b25d040 dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:DaemonSetCreated,Message:Created DaemonSet.apps/node-resolver -n openshift-dns because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:39:02.926266432 +0000 UTC m=+35.405231491,LastTimestamp:2022-11-15 05:39:02.926266432 +0000 UTC m=+35.405231491,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.926358 62389 ovn.go:58] OVNKubernetes config file 
not found, assuming default values Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.927111 62389 core.go:170] Applying corev1 api components/ovn/namespace.yaml Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.932872 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7d9b8993f2 dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:NamespaceCreated,Message:Created Namespace/openshift-ovn-kubernetes because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:39:02.932804594 +0000 UTC m=+35.411769600,LastTimestamp:2022-11-15 05:39:02.932804594 +0000 UTC m=+35.411769600,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.933573 62389 core.go:170] Applying corev1 api components/ovn/node/serviceaccount.yaml Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.938332 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7d9bdce231 dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:ServiceAccountCreated,Message:Created ServiceAccount/ovn-kubernetes-node -n openshift-ovn-kubernetes because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:39:02.938264113 +0000 UTC m=+35.417229120,LastTimestamp:2022-11-15 05:39:02.938264113 +0000 UTC m=+35.417229120,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.938357 62389 core.go:170] Applying corev1 api components/ovn/master/serviceaccount.yaml Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.942953 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7d9c239e75 dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:ServiceAccountCreated,Message:Created ServiceAccount/ovn-kubernetes-controller -n openshift-ovn-kubernetes because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:39:02.942899829 +0000 UTC m=+35.421864834,LastTimestamp:2022-11-15 05:39:02.942899829 +0000 UTC m=+35.421864834,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.943737 62389 rbac.go:144] Applying rbac components/ovn/role.yaml Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.948132 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7d9c729cbb dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:RoleCreated,Message:Created Role.rbac.authorization.k8s.io/openshift-ovn-kubernetes-node -n openshift-ovn-kubernetes because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:39:02.948076731 +0000 UTC m=+35.427041736,LastTimestamp:2022-11-15 05:39:02.948076731 +0000 UTC 
m=+35.427041736,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.948901 62389 rbac.go:144] Applying rbac components/ovn/rolebinding.yaml Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.953254 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7d9cc0d43e dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:RoleBindingCreated,Message:Created RoleBinding.rbac.authorization.k8s.io/openshift-ovn-kubernetes-node -n openshift-ovn-kubernetes because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:39:02.95320275 +0000 UTC m=+35.432167770,LastTimestamp:2022-11-15 05:39:02.95320275 +0000 UTC m=+35.432167770,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.954124 62389 rbac.go:144] Applying rbac components/ovn/clusterrole.yaml Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.959653 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7d9d227621 dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:ClusterRoleCreated,Message:Created ClusterRole.rbac.authorization.k8s.io/openshift-ovn-kubernetes-node because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:39:02.959601185 +0000 UTC m=+35.438566245,LastTimestamp:2022-11-15 
05:39:02.959601185 +0000 UTC m=+35.438566245,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.960465 62389 rbac.go:144] Applying rbac components/ovn/clusterrolebinding.yaml Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.965132 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7d9d760e7a dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:ClusterRoleBindingCreated,Message:Created ClusterRoleBinding.rbac.authorization.k8s.io/openshift-ovn-kubernetes-node because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:39:02.965079674 +0000 UTC m=+35.444044683,LastTimestamp:2022-11-15 05:39:02.965079674 +0000 UTC m=+35.444044683,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.965795 62389 core.go:170] Applying corev1 api components/ovn/configmap.yaml Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.969817 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7d9dbd86a6 dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:ConfigMapCreated,Message:Created ConfigMap/ovnkube-config -n openshift-ovn-kubernetes because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:39:02.969763494 +0000 UTC 
m=+35.448728500,LastTimestamp:2022-11-15 05:39:02.969763494 +0000 UTC m=+35.448728500,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.970525 62389 apps.go:94] Applying apps api components/ovn/master/daemonset.yaml Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.980275 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7d9e5d25b1 dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:DaemonSetCreated,Message:Created DaemonSet.apps/ovnkube-master -n openshift-ovn-kubernetes because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:39:02.980224433 +0000 UTC m=+35.459189586,LastTimestamp:2022-11-15 05:39:02.980224433 +0000 UTC m=+35.459189586,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.980308 62389 apps.go:94] Applying apps api components/ovn/node/daemonset.yaml Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.986561 62389 recorder_logging.go:44] &Event{ObjectMeta:{dummy.1727ab7d9ebc9e48 dummy 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:dummy,Name:dummy,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:DaemonSetCreated,Message:Created DaemonSet.apps/ovnkube-node -n openshift-ovn-kubernetes because it was missing,Source:EventSource{Component:,Host:,},FirstTimestamp:2022-11-15 05:39:02.986481224 +0000 UTC 
m=+35.465446228,LastTimestamp:2022-11-15 05:39:02.986481224 +0000 UTC m=+35.465446228,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:,ReportingInstance:,} Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.986590 62389 infra-services-controller.go:61] infrastructure-services-manager launched ocp componets Nov 15 05:39:02 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: infrastructure-services-manager I1115 05:39:02.986608 62389 manager.go:119] infrastructure-services-manager completed Nov 15 05:39:03 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kubelet E1115 05:39:03.061198 62389 kubelet.go:2057] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 15 05:39:03 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kubelet E1115 05:39:03.461897 62389 kubelet.go:2057] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 15 05:39:03 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kubelet I1115 05:39:03.587207 62389 apiserver.go:52] "Watching apiserver" Nov 15 05:39:04 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kubelet E1115 05:39:04.262749 62389 kubelet.go:2057] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 15 05:39:05 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kubelet E1115 05:39:05.863612 62389 kubelet.go:2057] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 15 05:39:07 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kubelet I1115 05:39:07.569330 62389 kubelet.go:146] kubelet is ready Nov 15 05:39:07 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: ??? I1115 05:39:07.569405 62389 run.go:140] MicroShift is ready Nov 15 05:39:07 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: ??? 
I1115 05:39:07.570047 62389 run.go:145] sent sd_notify readiness message Nov 15 05:39:07 release-ci-ci-op-k5cwk1pv-7cb14 systemd[1]: Started MicroShift. Nov 15 05:39:07 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kubelet I1115 05:39:07.607105 62389 manager.go:447] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 15 05:39:07 release-ci-ci-op-k5cwk1pv-7cb14 microshift[62389]: kubelet I1115 05:39:07.607539 62389 plugin_manager.go:118] "Starting Kubelet Plugin Manager" +++ dirname ./kuttl-test.sh ++ readlink -f ./../ + ROOT=/home/rhel8user + KUTTL_VERSION=0.10.0 + KUTTL=/home/rhel8user/bin/kuttl ++ dirname /home/rhel8user/bin/kuttl + mkdir -p /home/rhel8user/bin + '[' -e /home/rhel8user/bin/kuttl ']' + curl -sSLo /home/rhel8user/bin/kuttl https://github.com/kudobuilder/kuttl/releases/download/v0.10.0/kubectl-kuttl_0.10.0_linux_x86_64 + chmod a+x /home/rhel8user/bin/kuttl + /home/rhel8user/bin/kuttl test --namespace test === RUN kuttl harness.go:457: starting setup harness.go:248: running tests using configured kubeconfig. 
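The `+`-prefixed lines above are a `set -x` trace of the test's kuttl bootstrap: pin a kuttl release, install it into `~/bin` unless a cached binary exists, then run the suite against the `test` namespace. A hedged reconstruction of that script (the real CI script is not shown verbatim; the layout, paths, and release-asset naming are inferred from the trace):

```shell
#!/usr/bin/env bash
# Reconstruction of the kuttl bootstrap seen in the set -x trace above;
# names and structure are inferred, not copied from the actual CI script.
set -euo pipefail

KUTTL_VERSION=0.10.0
KUTTL="${HOME}/bin/kuttl"

# kuttl release assets are named kubectl-kuttl_<version>_<os>_<arch>
# (inferred from the download URL in the trace).
kuttl_url() {
  echo "https://github.com/kudobuilder/kuttl/releases/download/v${1}/kubectl-kuttl_${1}_linux_x86_64"
}

install_and_run() {
  mkdir -p "$(dirname "${KUTTL}")"
  # Skip the download when a cached binary already exists, as the trace does.
  if [ ! -e "${KUTTL}" ]; then
    curl -sSLo "${KUTTL}" "$(kuttl_url "${KUTTL_VERSION}")"
    chmod a+x "${KUTTL}"
  fi
  "${KUTTL}" test --namespace test
}

# Guarded behind a flag so the file can be sourced without touching
# the network or requiring a cluster.
if [ "${1:-}" = "--run" ]; then
  install_and_run
fi
```

Pinning `KUTTL_VERSION` this way keeps the CI run reproducible; the trade-off (visible below) is that an old kuttl can use API versions the cluster no longer serves.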
harness.go:285: Successful connection to cluster at: https://127.0.0.1:6443 harness.go:353: running tests harness.go:74: going to run test suite with timeout of 30 seconds for each step harness.go:365: testsuite: ./e2e has 1 tests === RUN kuttl/harness === RUN kuttl/harness/microshift === PAUSE kuttl/harness/microshift === CONT kuttl/harness/microshift logger.go:42: 05:39:13 | microshift | Skipping creation of user-supplied namespace: test logger.go:42: 05:39:13 | microshift/15- | starting test step 15- logger.go:42: 05:39:38 | microshift/15- | test step completed 15- logger.go:42: 05:39:38 | microshift/20- | starting test step 20- logger.go:42: 05:40:18 | microshift/20- | test step completed 20- logger.go:42: 05:40:18 | microshift/25- | starting test step 25- logger.go:42: 05:40:18 | microshift/25- | test step completed 25- logger.go:42: 05:40:18 | microshift/30- | starting test step 30- logger.go:42: 05:48:39 | microshift/30- | test step failed 30- case.go:254: failed in step 30- case.go:256: --- DaemonSet:openshift-storage/topolvm-node +++ DaemonSet:openshift-storage/topolvm-node @@ -1,9 +1,440 @@ apiVersion: apps/v1 kind: DaemonSet metadata: + annotations: + deprecated.daemonset.template.generation: "1" + operator.openshift.io/spec-hash: e017efc72e82605317aa4af29577072d6763a0faf428ceff3e26e3d0d3e49356 + labels: + app: topolvm-node + managedFields: + - apiVersion: apps/v1 + fieldsType: FieldsV1 + fieldsV1: + f:metadata: + f:annotations: + .: {} + f:deprecated.daemonset.template.generation: {} + f:operator.openshift.io/spec-hash: {} + f:labels: + .: {} + f:app: {} + f:spec: + f:revisionHistoryLimit: {} + f:selector: {} + f:template: + f:metadata: + f:annotations: + .: {} + f:odf-lvm.microshift.io/lvmd_config_sha256sum: {} + f:labels: + .: {} + f:app: {} + f:name: {} + f:spec: + f:containers: + k:{"name":"csi-registrar"}: + .: {} + f:args: {} + f:image: {} + f:imagePullPolicy: {} + f:lifecycle: + .: {} + f:preStop: + .: {} + f:exec: + .: {} + f:command: {} + 
f:name: {} + f:resources: {} + f:terminationMessagePath: {} + f:terminationMessagePolicy: {} + f:volumeMounts: + .: {} + k:{"mountPath":"/registration"}: + .: {} + f:mountPath: {} + f:name: {} + k:{"mountPath":"/run/topolvm"}: + .: {} + f:mountPath: {} + f:name: {} + k:{"name":"liveness-probe"}: + .: {} + f:args: {} + f:image: {} + f:imagePullPolicy: {} + f:name: {} + f:resources: {} + f:terminationMessagePath: {} + f:terminationMessagePolicy: {} + f:volumeMounts: + .: {} + k:{"mountPath":"/run/topolvm"}: + .: {} + f:mountPath: {} + f:name: {} + k:{"name":"lvmd"}: + .: {} + f:command: {} + f:image: {} + f:imagePullPolicy: {} + f:name: {} + f:resources: + .: {} + f:requests: + .: {} + f:cpu: {} + f:memory: {} + f:securityContext: + .: {} + f:privileged: {} + f:runAsUser: {} + f:terminationMessagePath: {} + f:terminationMessagePolicy: {} + f:volumeMounts: + .: {} + k:{"mountPath":"/etc/topolvm"}: + .: {} + f:mountPath: {} + f:name: {} + k:{"mountPath":"/run/lvmd"}: + .: {} + f:mountPath: {} + f:name: {} + k:{"name":"topolvm-node"}: + .: {} + f:command: {} + f:env: + .: {} + k:{"name":"NODE_NAME"}: + .: {} + f:name: {} + f:valueFrom: + .: {} + f:fieldRef: {} + f:image: {} + f:imagePullPolicy: {} + f:livenessProbe: + .: {} + f:failureThreshold: {} + f:httpGet: + .: {} + f:path: {} + f:port: {} + f:scheme: {} + f:initialDelaySeconds: {} + f:periodSeconds: {} + f:successThreshold: {} + f:timeoutSeconds: {} + f:name: {} + f:ports: + .: {} + k:{"containerPort":9808,"protocol":"TCP"}: + .: {} + f:containerPort: {} + f:name: {} + f:protocol: {} + f:resources: + .: {} + f:requests: + .: {} + f:cpu: {} + f:memory: {} + f:securityContext: + .: {} + f:privileged: {} + f:runAsUser: {} + f:terminationMessagePath: {} + f:terminationMessagePolicy: {} + f:volumeMounts: + .: {} + k:{"mountPath":"/run/lvmd"}: + .: {} + f:mountPath: {} + f:name: {} + k:{"mountPath":"/run/topolvm"}: + .: {} + f:mountPath: {} + f:name: {} + k:{"mountPath":"/var/lib/kubelet/plugins/kubernetes.io/csi"}: + 
.: {} + f:mountPath: {} + f:mountPropagation: {} + f:name: {} + k:{"mountPath":"/var/lib/kubelet/pods"}: + .: {} + f:mountPath: {} + f:mountPropagation: {} + f:name: {} + f:dnsPolicy: {} + f:hostPID: {} + f:initContainers: + .: {} + k:{"name":"file-checker"}: + .: {} + f:command: {} + f:image: {} + f:imagePullPolicy: {} + f:name: {} + f:resources: {} + f:terminationMessagePath: {} + f:terminationMessagePolicy: {} + f:volumeMounts: + .: {} + k:{"mountPath":"/etc/topolvm"}: + .: {} + f:mountPath: {} + f:name: {} + f:priorityClassName: {} + f:restartPolicy: {} + f:schedulerName: {} + f:securityContext: {} + f:serviceAccount: {} + f:serviceAccountName: {} + f:terminationGracePeriodSeconds: {} + f:volumes: + .: {} + k:{"name":"csi-plugin-dir"}: + .: {} + f:hostPath: + .: {} + f:path: {} + f:type: {} + f:name: {} + k:{"name":"lvmd-config-dir"}: + .: {} + f:configMap: + .: {} + f:defaultMode: {} + f:items: {} + f:name: {} + f:name: {} + k:{"name":"lvmd-socket-dir"}: + .: {} + f:emptyDir: + .: {} + f:medium: {} + f:name: {} + k:{"name":"node-plugin-dir"}: + .: {} + f:hostPath: + .: {} + f:path: {} + f:type: {} + f:name: {} + k:{"name":"pod-volumes-dir"}: + .: {} + f:hostPath: + .: {} + f:path: {} + f:type: {} + f:name: {} + k:{"name":"registration-dir"}: + .: {} + f:hostPath: + .: {} + f:path: {} + f:type: {} + f:name: {} + f:updateStrategy: + f:rollingUpdate: + .: {} + f:maxSurge: {} + f:maxUnavailable: {} + f:type: {} + manager: microshift + operation: Update + time: "2022-11-15T05:39:02Z" + - apiVersion: apps/v1 + fieldsType: FieldsV1 + fieldsV1: + f:status: + f:currentNumberScheduled: {} + f:desiredNumberScheduled: {} + f:numberUnavailable: {} + f:observedGeneration: {} + f:updatedNumberScheduled: {} + manager: microshift + operation: Update + subresource: status + time: "2022-11-15T05:39:33Z" name: topolvm-node namespace: openshift-storage +spec: + revisionHistoryLimit: 10 + selector: + matchLabels: + app: topolvm-node + template: + metadata: + annotations: + 
odf-lvm.microshift.io/lvmd_config_sha256sum: cd811881ede06f69cba50cf9408e349ccf3edca76a9aec93d8cd35ba04e6033d + creationTimestamp: null + labels: + app: topolvm-node + name: lvmcluster-sample + spec: + containers: + - command: + - /lvmd + - --config=/etc/topolvm/lvmd.yaml + - --container=true + image: registry.redhat.io/odf4/odf-topolvm-rhel8@sha256:362c41177d086fc7c8d4fa4ac3bbedb18b1902e950feead9219ea59d1ad0e7ad + imagePullPolicy: IfNotPresent + name: lvmd + resources: + requests: + cpu: 250m + memory: 250Mi + securityContext: + privileged: true + runAsUser: 0 + terminationMessagePath: /dev/termination-log + terminationMessagePolicy: File + volumeMounts: + - mountPath: /run/lvmd + name: lvmd-socket-dir + - mountPath: /etc/topolvm + name: lvmd-config-dir + - command: + - /topolvm-node + - --lvmd-socket=/run/lvmd/lvmd.socket + env: + - name: NODE_NAME + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: spec.nodeName + image: registry.redhat.io/odf4/odf-topolvm-rhel8@sha256:362c41177d086fc7c8d4fa4ac3bbedb18b1902e950feead9219ea59d1ad0e7ad + imagePullPolicy: IfNotPresent + livenessProbe: + failureThreshold: 3 + httpGet: + path: /healthz + port: healthz + scheme: HTTP + initialDelaySeconds: 10 + periodSeconds: 60 + successThreshold: 1 + timeoutSeconds: 3 + name: topolvm-node + ports: + - containerPort: 9808 + name: healthz + protocol: TCP + resources: + requests: + cpu: 250m + memory: 250Mi + securityContext: + privileged: true + runAsUser: 0 + terminationMessagePath: /dev/termination-log + terminationMessagePolicy: File + volumeMounts: + - mountPath: /run/topolvm + name: node-plugin-dir + - mountPath: /run/lvmd + name: lvmd-socket-dir + - mountPath: /var/lib/kubelet/pods + mountPropagation: Bidirectional + name: pod-volumes-dir + - mountPath: /var/lib/kubelet/plugins/kubernetes.io/csi + mountPropagation: Bidirectional + name: csi-plugin-dir + - args: + - --csi-address=/run/topolvm/csi-topolvm.sock + - 
--kubelet-registration-path=/var/lib/kubelet/plugins/topolvm.cybozu.com/node/csi-topolvm.sock + image: registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:3babcf219371017d92f8bc3301de6c63681fcfaa8c344ec7891c8e84f31420eb + imagePullPolicy: IfNotPresent + lifecycle: + preStop: + exec: + command: + - /bin/sh + - -c + - rm -rf /registration/topolvm.cybozu.com /registration/topolvm.cybozu.com-reg.sock + name: csi-registrar + resources: {} + terminationMessagePath: /dev/termination-log + terminationMessagePolicy: File + volumeMounts: + - mountPath: /run/topolvm + name: node-plugin-dir + - mountPath: /registration + name: registration-dir + - args: + - --csi-address=/run/topolvm/csi-topolvm.sock + image: registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e4b0f6c89a12d26babdc2feae7d13d3f281ac4d38c24614c13c230b4a29ec56e + imagePullPolicy: IfNotPresent + name: liveness-probe + resources: {} + terminationMessagePath: /dev/termination-log + terminationMessagePolicy: File + volumeMounts: + - mountPath: /run/topolvm + name: node-plugin-dir + dnsPolicy: ClusterFirst + hostPID: true + initContainers: + - command: + - /usr/bin/bash + - -c + - until [ -f /etc/topolvm/lvmd.yaml ]; do echo waiting for lvmd config file; sleep 5; done + image: registry.access.redhat.com/ubi8/openssl@sha256:8b41865d30b7947de68a9c1747616bce4efab4f60f68f8b7016cd84d7708af6b + imagePullPolicy: IfNotPresent + name: file-checker + resources: {} + terminationMessagePath: /dev/termination-log + terminationMessagePolicy: File + volumeMounts: + - mountPath: /etc/topolvm + name: lvmd-config-dir + priorityClassName: system-node-critical + restartPolicy: Always + schedulerName: default-scheduler + securityContext: {} + serviceAccount: topolvm-node + serviceAccountName: topolvm-node + terminationGracePeriodSeconds: 30 + volumes: + - hostPath: + path: /var/lib/kubelet/plugins_registry/ + type: Directory + name: registration-dir + - hostPath: + path: 
/var/lib/kubelet/plugins/topolvm.cybozu.com/node + type: DirectoryOrCreate + name: node-plugin-dir + - hostPath: + path: /var/lib/kubelet/plugins/kubernetes.io/csi + type: DirectoryOrCreate + name: csi-plugin-dir + - hostPath: + path: /var/lib/kubelet/pods/ + type: DirectoryOrCreate + name: pod-volumes-dir + - configMap: + defaultMode: 420 + items: + - key: lvmd.yaml + path: lvmd.yaml + name: lvmd + name: lvmd-config-dir + - emptyDir: + medium: Memory + name: lvmd-socket-dir + updateStrategy: + rollingUpdate: + maxSurge: 0 + maxUnavailable: 1 + type: RollingUpdate status: - numberAvailable: 1 - numberReady: 1 + currentNumberScheduled: 1 + desiredNumberScheduled: 1 + numberMisscheduled: 0 + numberReady: 0 + numberUnavailable: 1 + observedGeneration: 1 + updatedNumberScheduled: 1 case.go:256: resource DaemonSet:openshift-storage/topolvm-node: .status.numberAvailable: key is missing from map logger.go:42: 05:48:39 | microshift | Failed to collect events for microshift in ns test: no matches for kind "Event" in version "events.k8s.io/v1beta1" logger.go:42: 05:48:39 | microshift | Skipping deletion of user-supplied namespace: test === CONT kuttl harness.go:399: run tests finished harness.go:508: cleaning up harness.go:563: removing temp folder: "" --- FAIL: kuttl (566.13s) --- FAIL: kuttl/harness (0.00s) --- FAIL: kuttl/harness/microshift (565.95s) FAIL === DEBUG INFORMATION === + echo + echo '=== DEBUG INFORMATION ===' + echo + kubectl get nodes NAME STATUS ROLES AGE VERSION release-ci-ci-op-k5cwk1pv-7cb14 Ready control-plane,master,worker 9m37s v1.24.0 + kubectl get nodes -o yaml apiVersion: v1 items: - apiVersion: v1 kind: Node metadata: annotations: capacity.topolvm.cybozu.com/00default: "10733223936" capacity.topolvm.cybozu.com/default: "10733223936" csi.volume.kubernetes.io/nodeid: '{"topolvm.cybozu.com":"release-ci-ci-op-k5cwk1pv-7cb14"}' k8s.ovn.org/host-addresses: '["10.0.0.2"]' k8s.ovn.org/l3-gateway-config: 
        '{"default":{"mode":"local","interface-id":"br-ex_release-ci-ci-op-k5cwk1pv-7cb14","mac-address":"42:01:0a:00:00:02","ip-addresses":["10.0.0.2/32"],"ip-address":"10.0.0.2/32","next-hops":["10.0.0.1"],"next-hop":"10.0.0.1","node-port-enable":"true","vlan-id":"0"}}'
      k8s.ovn.org/node-chassis-id: 77436c83-1258-484f-b8d8-ec91acb3c8f3
      k8s.ovn.org/node-gateway-router-lrp-ifaddr: '{"ipv4":"100.64.0.2/16"}'
      k8s.ovn.org/node-mgmt-port-mac-address: 52:17:ad:e6:11:7d
      k8s.ovn.org/node-primary-ifaddr: '{"ipv4":"10.0.0.2/32"}'
      k8s.ovn.org/node-subnets: '{"default":"10.42.0.0/24"}'
      node.alpha.kubernetes.io/ttl: "0"
      volumes.kubernetes.io/controller-managed-attach-detach: "true"
    creationTimestamp: "2022-11-15T05:39:02Z"
    finalizers:
    - topolvm.cybozu.com/node
    labels:
      beta.kubernetes.io/arch: amd64
      beta.kubernetes.io/os: linux
      kubernetes.io/arch: amd64
      kubernetes.io/hostname: release-ci-ci-op-k5cwk1pv-7cb14
      kubernetes.io/os: linux
      node-role.kubernetes.io/control-plane: ""
      node-role.kubernetes.io/master: ""
      node-role.kubernetes.io/worker: ""
      topology.topolvm.cybozu.com/node: release-ci-ci-op-k5cwk1pv-7cb14
    name: release-ci-ci-op-k5cwk1pv-7cb14
    resourceVersion: "1161"
    uid: 27ab8ad7-1b77-4887-a5a0-c8c7d8e8dd8a
  spec:
    podCIDR: 10.42.0.0/24
    podCIDRs:
    - 10.42.0.0/24
  status:
    addresses:
    - address: 10.0.0.2
      type: InternalIP
    - address: release-ci-ci-op-k5cwk1pv-7cb14
      type: Hostname
    allocatable:
      cpu: "8"
      ephemeral-storage: "19127284500"
      hugepages-1Gi: "0"
      hugepages-2Mi: "0"
      memory: 32520744Ki
      pods: "250"
    capacity:
      cpu: "8"
      ephemeral-storage: 20268Mi
      hugepages-1Gi: "0"
      hugepages-2Mi: "0"
      memory: 32623144Ki
      pods: "250"
    conditions:
    - lastHeartbeatTime: "2022-11-15T05:45:09Z"
      lastTransitionTime: "2022-11-15T05:39:02Z"
      message: kubelet has sufficient memory available
      reason: KubeletHasSufficientMemory
      status: "False"
      type: MemoryPressure
    - lastHeartbeatTime: "2022-11-15T05:45:09Z"
      lastTransitionTime: "2022-11-15T05:39:02Z"
      message: kubelet has no disk pressure
      reason: KubeletHasNoDiskPressure
      status:
        "False"
      type: DiskPressure
    - lastHeartbeatTime: "2022-11-15T05:45:09Z"
      lastTransitionTime: "2022-11-15T05:39:02Z"
      message: kubelet has sufficient PID available
      reason: KubeletHasSufficientPID
      status: "False"
      type: PIDPressure
    - lastHeartbeatTime: "2022-11-15T05:45:09Z"
      lastTransitionTime: "2022-11-15T05:39:33Z"
      message: kubelet is posting ready status
      reason: KubeletReady
      status: "True"
      type: Ready
    daemonEndpoints:
      kubeletEndpoint:
        Port: 10250
    images:
    - names:
      - quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd9365d7ab0a70fd0d67937853bed13eaece6f49895aed34d7eca038f5e0aa77
      sizeBytes: 697979339
    - names:
      - quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24bfe9543bd34c6c3124c39c319ef0ec20534aec974126617752b2883d6d6cff
      sizeBytes: 465946351
    - names:
      - quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2e14189c0096c2294368a1a8edd7dec5f30c93f8bbd614da0e78127c8b194ab7
      sizeBytes: 414570760
    - names:
      - quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2741fb3a1088349c089868bb57bbd1d0416e4425f4e8f95b62df9bdc267a19d7
      sizeBytes: 411735201
    - names:
      - quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:efa3a9aae6ad83d0eec44b654e75a11ed4887a4d25f4a7412a102456688a5840
      sizeBytes: 403008245
    - names:
      - quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3b75e8e24ef2235370454dd557ae59f93bf47c3e17653109ab7d869633728347
      sizeBytes: 391301603
    - names:
      - quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f2e139e96869aa0f807c42cc557b8f857de9cf749baa00c881f57b10679f77e2
      sizeBytes: 335083353
    - names:
      - registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4b7d8035055a867b14265495bd2787db608b9ff39ed4e6f65ff24488a2e488d2
      sizeBytes: 330591433
    - names:
      - registry.redhat.io/openshift4/ose-csi-external-resizer@sha256:ca34c46c4a4c1a4462b8aa89d1dbb5427114da098517954895ff797146392898
      sizeBytes: 327705301
    - names:
      - registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:3babcf219371017d92f8bc3301de6c63681fcfaa8c344ec7891c8e84f31420eb
      sizeBytes: 291248387
    - names:
      - registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e4b0f6c89a12d26babdc2feae7d13d3f281ac4d38c24614c13c230b4a29ec56e
      sizeBytes: 289276449
    - names:
      - registry.redhat.io/odf4/odf-topolvm-rhel8@sha256:362c41177d086fc7c8d4fa4ac3bbedb18b1902e950feead9219ea59d1ad0e7ad
      sizeBytes: 190593719
    - names:
      - registry.access.redhat.com/ubi8/openssl@sha256:8b41865d30b7947de68a9c1747616bce4efab4f60f68f8b7016cd84d7708af6b
      sizeBytes: 40733352
    nodeInfo:
      architecture: amd64
      bootID: 0d536fb9-8b98-4be3-a258-9b1133f95da8
      containerRuntimeVersion: cri-o://1.25.1-2.rhaos4.12.gitafa0c57.el8
      kernelVersion: 4.18.0-372.16.1.el8_6.x86_64
      kubeProxyVersion: v1.24.0
      kubeletVersion: v1.24.0
      machineID: 8a156cac7040234eb04eadcb0b8b51a1
      operatingSystem: linux
      osImage: Red Hat Enterprise Linux 8.6 (Ootpa)
      systemUUID: 0c315a2e-a81f-de06-d39c-2648e1e68fb8
kind: List
metadata:
  resourceVersion: ""
+ kubectl get pods -A
NAMESPACE                  NAME                                  READY   STATUS            RESTARTS   AGE
openshift-dns              dns-default-tw2xt                     2/2     Running           0          9m6s
openshift-dns              node-resolver-jhcw4                   1/1     Running           0          9m28s
openshift-ingress          router-default-76b7657c68-6xcfc       1/1     Running           0          9m29s
openshift-ovn-kubernetes   ovnkube-master-kdsb7                  4/4     Running           0          9m28s
openshift-ovn-kubernetes   ovnkube-node-b5wd2                    1/1     Running           0          9m28s
openshift-service-ca       service-ca-77fc4cc659-dp8dn           1/1     Running           0          9m29s
openshift-storage          topolvm-controller-8456864f89-vg42d   4/4     Running           0          9m29s
openshift-storage          topolvm-node-2tnh5                    0/4     PodInitializing   0          9m6s
+ kubectl get pods -A -o yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: Pod
  metadata:
    annotations:
      k8s.ovn.org/pod-networks: '{"default":{"ip_addresses":["10.42.0.6/24"],"mac_address":"0a:58:0a:2a:00:06","gateway_ips":["10.42.0.1"],"ip_address":"10.42.0.6/24","gateway_ip":"10.42.0.1"}}'
      target.workload.openshift.io/management: '{"effect": "PreferredDuringScheduling"}'
    creationTimestamp: "2022-11-15T05:39:33Z"
    generateName: dns-default-
    labels:
      controller-revision-hash: 85975f57cd
      dns.operator.openshift.io/daemonset-dns: default
pod-template-generation: "1" name: dns-default-tw2xt namespace: openshift-dns ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: DaemonSet name: dns-default uid: 409c49c4-48e6-4c56-984b-ab89079c592b resourceVersion: "720" uid: b4bede0d-71c2-40dd-9ebe-3395e3ddf85e spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchFields: - key: metadata.name operator: In values: - release-ci-ci-op-k5cwk1pv-7cb14 containers: - args: - -conf - /etc/coredns/Corefile command: - coredns image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:efa3a9aae6ad83d0eec44b654e75a11ed4887a4d25f4a7412a102456688a5840 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 5 httpGet: path: /health port: 8080 scheme: HTTP initialDelaySeconds: 60 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 5 name: dns ports: - containerPort: 5353 name: dns protocol: UDP - containerPort: 5353 name: dns-tcp protocol: TCP readinessProbe: failureThreshold: 3 httpGet: path: /ready port: 8181 scheme: HTTP initialDelaySeconds: 10 periodSeconds: 3 successThreshold: 1 timeoutSeconds: 3 resources: requests: cpu: 50m memory: 70Mi terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /etc/coredns name: config-volume readOnly: true - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-2bt8l readOnly: true - args: - --logtostderr - --secure-listen-address=:9154 - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 - --upstream=http://127.0.0.1:9153/ - --tls-cert-file=/etc/tls/private/tls.crt - --tls-private-key-file=/etc/tls/private/tls.key image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3b75e8e24ef2235370454dd557ae59f93bf47c3e17653109ab7d869633728347 
imagePullPolicy: IfNotPresent name: kube-rbac-proxy ports: - containerPort: 9154 name: metrics protocol: TCP resources: requests: cpu: 10m memory: 40Mi terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /etc/tls/private name: metrics-tls readOnly: true - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-2bt8l readOnly: true dnsPolicy: Default enableServiceLinks: true nodeName: release-ci-ci-op-k5cwk1pv-7cb14 nodeSelector: kubernetes.io/os: linux preemptionPolicy: PreemptLowerPriority priority: 2000001000 priorityClassName: system-node-critical restartPolicy: Always schedulerName: default-scheduler securityContext: {} serviceAccount: dns serviceAccountName: dns terminationGracePeriodSeconds: 30 tolerations: - key: node-role.kubernetes.io/master operator: Exists - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists - effect: NoSchedule key: node.kubernetes.io/disk-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/pid-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/unschedulable operator: Exists volumes: - configMap: defaultMode: 420 items: - key: Corefile path: Corefile name: dns-default name: config-volume - name: metrics-tls secret: defaultMode: 420 secretName: dns-default-metrics-tls - name: kube-api-access-2bt8l projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace - configMap: items: - key: service-ca.crt path: service-ca.crt name: openshift-service-ca.crt status: conditions: - lastProbeTime: null lastTransitionTime: "2022-11-15T05:39:33Z" status: "True" type: Initialized 
- lastProbeTime: null lastTransitionTime: "2022-11-15T05:39:56Z" status: "True" type: Ready - lastProbeTime: null lastTransitionTime: "2022-11-15T05:39:56Z" status: "True" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2022-11-15T05:39:33Z" status: "True" type: PodScheduled containerStatuses: - containerID: cri-o://31d008edcb11e830255afd1bef37f38cebd0797f00c80cda5b3228e3d34f0475 image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:efa3a9aae6ad83d0eec44b654e75a11ed4887a4d25f4a7412a102456688a5840 imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:efa3a9aae6ad83d0eec44b654e75a11ed4887a4d25f4a7412a102456688a5840 lastState: {} name: dns ready: true restartCount: 0 started: true state: running: startedAt: "2022-11-15T05:39:45Z" - containerID: cri-o://aa910d7f1e329bfbe5da330c4849413c959d4bc1dec6a197ab65d88c18ae7ef0 image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3b75e8e24ef2235370454dd557ae59f93bf47c3e17653109ab7d869633728347 imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3b75e8e24ef2235370454dd557ae59f93bf47c3e17653109ab7d869633728347 lastState: {} name: kube-rbac-proxy ready: true restartCount: 0 started: true state: running: startedAt: "2022-11-15T05:39:49Z" hostIP: 10.0.0.2 phase: Running podIP: 10.42.0.6 podIPs: - ip: 10.42.0.6 qosClass: Burstable startTime: "2022-11-15T05:39:33Z" - apiVersion: v1 kind: Pod metadata: annotations: target.workload.openshift.io/management: '{"effect": "PreferredDuringScheduling"}' creationTimestamp: "2022-11-15T05:39:11Z" generateName: node-resolver- labels: controller-revision-hash: 6db77686b5 dns.operator.openshift.io/daemonset-node-resolver: "" pod-template-generation: "1" name: node-resolver-jhcw4 namespace: openshift-dns ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: DaemonSet name: node-resolver uid: 8861af62-8ab1-449f-a5d0-9a956db39d71 resourceVersion: "524" uid: ad9ff0e1-9e4e-4385-bed3-6435d577c18a spec: affinity: 
nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchFields: - key: metadata.name operator: In values: - release-ci-ci-op-k5cwk1pv-7cb14 containers: - command: - /bin/bash - -c - | #!/bin/bash set -uo pipefail trap 'jobs -p | xargs kill || true; wait; exit 0' TERM NAMESERVER=${DNS_DEFAULT_SERVICE_HOST} OPENSHIFT_MARKER="openshift-generated-node-resolver" HOSTS_FILE="/etc/hosts" TEMP_FILE="/etc/hosts.tmp" IFS=', ' read -r -a services <<< "${SERVICES}" # Make a temporary file with the old hosts file's attributes. cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}" while true; do declare -A svc_ips for svc in "${services[@]}"; do # Fetch service IP from cluster dns if present. We make several tries # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones # are for deployments with Kuryr on older OpenStack (OSP13) - those do not # support UDP loadbalancers and require reaching DNS through TCP. cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') for i in ${!cmds[*]} do ips=($(eval "${cmds[i]}")) if [[ "$?" 
-eq 0 && "${#ips[@]}" -ne 0 ]]; then svc_ips["${svc}"]="${ips[@]}" break fi done done # Update /etc/hosts only if we get valid service IPs # We will not update /etc/hosts when there is coredns service outage or api unavailability # Stale entries could exist in /etc/hosts if the service is deleted if [[ -n "${svc_ips[*]-}" ]]; then # Build a new hosts file from /etc/hosts with our custom entries filtered out grep -v "# ${OPENSHIFT_MARKER}" "${HOSTS_FILE}" > "${TEMP_FILE}" # Append resolver entries for services for svc in "${!svc_ips[@]}"; do for ip in ${svc_ips[${svc}]}; do echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" done done # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior # Replace /etc/hosts with our modified version if needed cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn fi sleep 60 & wait unset svc_ips done env: - name: SERVICES value: image-registry.openshift-image-registry.svc - name: NAMESERVER value: 172.30.0.10 - name: CLUSTER_DOMAIN value: cluster.local image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24bfe9543bd34c6c3124c39c319ef0ec20534aec974126617752b2883d6d6cff imagePullPolicy: IfNotPresent name: dns-node-resolver resources: requests: cpu: 5m memory: 21Mi securityContext: privileged: true terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /etc/hosts name: hosts-file - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-v2wfq readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true hostNetwork: true nodeName: release-ci-ci-op-k5cwk1pv-7cb14 nodeSelector: kubernetes.io/os: linux preemptionPolicy: PreemptLowerPriority priority: 2000001000 priorityClassName: system-node-critical restartPolicy: Always schedulerName: default-scheduler securityContext: {} serviceAccount: 
node-resolver serviceAccountName: node-resolver terminationGracePeriodSeconds: 30 tolerations: - operator: Exists volumes: - hostPath: path: /etc/hosts type: File name: hosts-file - name: kube-api-access-v2wfq projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace - configMap: items: - key: service-ca.crt path: service-ca.crt name: openshift-service-ca.crt status: conditions: - lastProbeTime: null lastTransitionTime: "2022-11-15T05:39:11Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2022-11-15T05:39:23Z" status: "True" type: Ready - lastProbeTime: null lastTransitionTime: "2022-11-15T05:39:23Z" status: "True" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2022-11-15T05:39:11Z" status: "True" type: PodScheduled containerStatuses: - containerID: cri-o://85cdd2d22091838b73c221cb976a422c4facea28fc12aae044cffd2e3f2748fe image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24bfe9543bd34c6c3124c39c319ef0ec20534aec974126617752b2883d6d6cff imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24bfe9543bd34c6c3124c39c319ef0ec20534aec974126617752b2883d6d6cff lastState: {} name: dns-node-resolver ready: true restartCount: 0 started: true state: running: startedAt: "2022-11-15T05:39:23Z" hostIP: 10.0.0.2 phase: Running podIP: 10.0.0.2 podIPs: - ip: 10.0.0.2 qosClass: Burstable startTime: "2022-11-15T05:39:11Z" - apiVersion: v1 kind: Pod metadata: annotations: k8s.ovn.org/pod-networks: '{"default":{"ip_addresses":["10.42.0.4/24"],"mac_address":"0a:58:0a:2a:00:04","gateway_ips":["10.42.0.1"],"ip_address":"10.42.0.4/24","gateway_ip":"10.42.0.1"}}' openshift.io/scc: hostnetwork target.workload.openshift.io/management: '{"effect": "PreferredDuringScheduling"}' 
unsupported.do-not-use.openshift.io/override-liveness-grace-period-seconds: "10" creationTimestamp: "2022-11-15T05:39:10Z" generateName: router-default-76b7657c68- labels: ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default pod-template-hash: 76b7657c68 name: router-default-76b7657c68-6xcfc namespace: openshift-ingress ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: router-default-76b7657c68 uid: c66f794a-cf64-46a6-a5dd-2b00fbcefb44 resourceVersion: "682" uid: 868ba04c-b1ea-438a-a89c-8e90befa7a1d spec: containers: - env: - name: ROUTER_SERVICE_NAMESPACE value: openshift-ingress - name: DEFAULT_CERTIFICATE_DIR value: /etc/pki/tls/private - name: DEFAULT_DESTINATION_CA_PATH value: /var/run/configmaps/service-ca/service-ca.crt - name: STATS_PORT value: "1936" - name: RELOAD_INTERVAL value: 5s - name: ROUTER_ALLOW_WILDCARD_ROUTES value: "false" - name: ROUTER_CANONICAL_HOSTNAME value: router-default.apps.cluster.local - name: ROUTER_CIPHERS value: ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384 - name: ROUTER_CIPHERSUITES value: TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256 - name: ROUTER_DISABLE_HTTP2 value: "true" - name: ROUTER_DISABLE_NAMESPACE_OWNERSHIP_CHECK value: "false" - name: ROUTER_LOAD_BALANCE_ALGORITHM value: random - name: ROUTER_METRICS_TYPE value: haproxy - name: ROUTER_SERVICE_NAME value: default - name: ROUTER_SET_FORWARDED_HEADERS value: append - name: ROUTER_TCP_BALANCE_SCHEME value: source - name: ROUTER_THREADS value: "4" - name: ROUTER_USE_PROXY_PROTOCOL value: "true" - name: SSL_MIN_VERSION value: TLSv1.2 image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2e14189c0096c2294368a1a8edd7dec5f30c93f8bbd614da0e78127c8b194ab7 imagePullPolicy: IfNotPresent 
livenessProbe: failureThreshold: 3 httpGet: path: /healthz port: 1936 scheme: HTTP periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 name: router ports: - containerPort: 80 hostPort: 80 name: http protocol: TCP - containerPort: 443 hostPort: 443 name: https protocol: TCP - containerPort: 1936 name: metrics protocol: TCP readinessProbe: failureThreshold: 3 httpGet: path: /healthz/ready port: 1936 scheme: HTTP periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 resources: requests: cpu: 100m memory: 256Mi securityContext: allowPrivilegeEscalation: true capabilities: drop: - KILL - MKNOD - SETGID - SETUID runAsNonRoot: true runAsUser: 1000090000 startupProbe: failureThreshold: 120 httpGet: path: /healthz/ready port: 1936 scheme: HTTP periodSeconds: 1 successThreshold: 1 timeoutSeconds: 1 terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /etc/pki/tls/private name: default-certificate readOnly: true - mountPath: /var/run/configmaps/service-ca name: service-ca-bundle readOnly: true - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-5fk5k readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true nodeName: release-ci-ci-op-k5cwk1pv-7cb14 nodeSelector: kubernetes.io/os: linux node-role.kubernetes.io/worker: "" preemptionPolicy: PreemptLowerPriority priority: 2000000000 priorityClassName: system-cluster-critical restartPolicy: Always schedulerName: default-scheduler securityContext: fsGroup: 1000090000 seLinuxOptions: level: s0:c10,c0 supplementalGroups: - 1000090000 serviceAccount: router serviceAccountName: router terminationGracePeriodSeconds: 3600 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists volumes: - name: 
default-certificate secret: defaultMode: 420 secretName: router-certs-default - configMap: defaultMode: 420 items: - key: service-ca.crt path: service-ca.crt name: service-ca-bundle optional: false name: service-ca-bundle - name: kube-api-access-5fk5k projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace - configMap: items: - key: service-ca.crt path: service-ca.crt name: openshift-service-ca.crt status: conditions: - lastProbeTime: null lastTransitionTime: "2022-11-15T05:39:33Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2022-11-15T05:39:47Z" status: "True" type: Ready - lastProbeTime: null lastTransitionTime: "2022-11-15T05:39:47Z" status: "True" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2022-11-15T05:39:33Z" status: "True" type: PodScheduled containerStatuses: - containerID: cri-o://ed4a629a259f8e085d11c43c63fcd10eb44f947f8f8a3fbb04f4b0d2c26330b4 image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2e14189c0096c2294368a1a8edd7dec5f30c93f8bbd614da0e78127c8b194ab7 imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2e14189c0096c2294368a1a8edd7dec5f30c93f8bbd614da0e78127c8b194ab7 lastState: {} name: router ready: true restartCount: 0 started: true state: running: startedAt: "2022-11-15T05:39:45Z" hostIP: 10.0.0.2 phase: Running podIP: 10.42.0.4 podIPs: - ip: 10.42.0.4 qosClass: Burstable startTime: "2022-11-15T05:39:33Z" - apiVersion: v1 kind: Pod metadata: annotations: target.workload.openshift.io/management: '{"effect": "PreferredDuringScheduling"}' creationTimestamp: "2022-11-15T05:39:11Z" generateName: ovnkube-master- labels: app: ovnkube-master component: network controller-revision-hash: 54875d4d5c kubernetes.io/os: linux openshift.io/component: network ovn-db-pod: "true" 
pod-template-generation: "1" type: infra name: ovnkube-master-kdsb7 namespace: openshift-ovn-kubernetes ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: DaemonSet name: ovnkube-master uid: 3cda4401-ae2c-443e-8df5-ffbbc4ec1a05 resourceVersion: "556" uid: 2bfbc0df-23d0-4589-aa8f-a6ff9e236c29 spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchFields: - key: metadata.name operator: In values: - release-ci-ci-op-k5cwk1pv-7cb14 containers: - command: - /bin/bash - -c - | set -xem if [[ -f /env/_master ]]; then set -o allexport source /env/_master set +o allexport fi quit() { echo "$(date -Iseconds) - stopping ovn-northd" OVN_MANAGE_OVSDB=no /usr/share/ovn/scripts/ovn-ctl stop_northd echo "$(date -Iseconds) - ovn-northd stopped" rm -f /var/run/ovn/ovn-northd.pid exit 0 } # end of quit trap quit TERM INT echo "$(date -Iseconds) - starting ovn-northd" exec ovn-northd \ --no-chdir "-vconsole:${OVN_LOG_LEVEL}" -vfile:off "-vPATTERN:console:%D{%Y-%m-%dT%H:%M:%S.###Z}|%05N|%c%T|%p|%m" \ --pidfile /var/run/ovn/ovn-northd.pid & wait $! 
env: - name: OVN_LOG_LEVEL value: info image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd9365d7ab0a70fd0d67937853bed13eaece6f49895aed34d7eca038f5e0aa77 imagePullPolicy: IfNotPresent lifecycle: preStop: exec: command: - /bin/bash - -c - OVN_MANAGE_OVSDB=no /usr/share/ovn/scripts/ovn-ctl stop_northd name: northd resources: requests: cpu: 10m memory: 10Mi terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /run/openvswitch/ name: run-openvswitch - mountPath: /run/ovn/ name: run-ovn - mountPath: /env name: env-overrides - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-t7qn5 readOnly: true - command: - /bin/bash - -c - | set -xem if [[ -f /env/_master ]]; then set -o allexport source /env/_master set +o allexport fi quit() { echo "$(date -Iseconds) - stopping nbdb" /usr/share/ovn/scripts/ovn-ctl stop_nb_ovsdb echo "$(date -Iseconds) - nbdb stopped" rm -f /var/run/ovn/ovnnb_db.pid exit 0 } # end of quit trap quit TERM INT bracketify() { case "$1" in *:*) echo "[$1]" ;; *) echo "$1" ;; esac } compact() { sleep 15 while true; do /usr/bin/ovn-appctl -t /var/run/ovn/ovn${1}_db.ctl --timeout=5 ovsdb-server/compact 2>/dev/null || true sleep 600 done } # initialize variables db="nb" ovn_db_file="/etc/ovn/ovn${db}_db.db" OVN_ARGS="--db-nb-cluster-local-port=9643 --no-monitor" echo "$(date -Iseconds) - starting nbdb" exec /usr/share/ovn/scripts/ovn-ctl \ ${OVN_ARGS} \ --ovn-nb-log="-vconsole:${OVN_LOG_LEVEL} -vfile:off -vPATTERN:console:%D{%Y-%m-%dT%H:%M:%S.###Z}|%05N|%c%T|%p|%m" \ run_nb_ovsdb & db_pid=$! 
compact $db & wait $db_pid env: - name: OVN_LOG_LEVEL value: info - name: OVN_NORTHD_PROBE_INTERVAL value: "5000" image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd9365d7ab0a70fd0d67937853bed13eaece6f49895aed34d7eca038f5e0aa77 imagePullPolicy: IfNotPresent lifecycle: postStart: exec: command: - /bin/bash - -c - | set -x rm -f /var/run/ovn/ovnnb_db.pid #configure northd_probe_interval northd_probe_interval=${OVN_NORTHD_PROBE_INTERVAL:-10000} echo "Setting northd probe interval to ${northd_probe_interval} ms" retries=0 current_probe_interval=0 while [[ "${retries}" -lt 10 ]]; do current_probe_interval=$(ovn-nbctl --if-exists get NB_GLOBAL . options:northd_probe_interval) if [[ $? == 0 ]]; then current_probe_interval=$(echo ${current_probe_interval} | tr -d '\"') break else sleep 2 (( retries += 1 )) fi done if [[ "${current_probe_interval}" != "${northd_probe_interval}" ]]; then retries=0 while [[ "${retries}" -lt 10 ]]; do ovn-nbctl set NB_GLOBAL . options:northd_probe_interval=${northd_probe_interval} if [[ $? != 0 ]]; then echo "Failed to set northd probe interval to ${northd_probe_interval}. retrying....." 
sleep 2 (( retries += 1 )) else echo "Successfully set northd probe interval to ${northd_probe_interval} ms" break fi done fi preStop: exec: command: - /bin/bash - -c - | echo "$(date -Iseconds) - stopping nbdb" /usr/share/ovn/scripts/ovn-ctl stop_nb_ovsdb echo "$(date -Iseconds) - nbdb stopped" rm -f /var/run/ovn/ovnnb_db.pid name: nbdb readinessProbe: exec: command: - /bin/bash - -c - | set -xeo pipefail /usr/bin/ovn-appctl -t /var/run/ovn/ovnnb_db.ctl --timeout=5 ovsdb-server/memory-trim-on-compaction on 2>/dev/null failureThreshold: 3 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 5 resources: requests: cpu: 10m memory: 10Mi terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /run/openvswitch/ name: run-openvswitch - mountPath: /run/ovn/ name: run-ovn - mountPath: /env name: env-overrides - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-t7qn5 readOnly: true - command: - /bin/bash - -c - | set -xem if [[ -f /env/_master ]]; then set -o allexport source /env/_master set +o allexport fi quit() { echo "$(date -Iseconds) - stopping sbdb" /usr/share/ovn/scripts/ovn-ctl stop_sb_ovsdb echo "$(date -Iseconds) - sbdb stopped" rm -f /var/run/ovn/ovnsb_db.pid exit 0 } # end of quit trap quit TERM INT bracketify() { case "$1" in *:*) echo "[$1]" ;; *) echo "$1" ;; esac } compact() { sleep 15 while true; do /usr/bin/ovn-appctl -t /var/run/ovn/ovn${1}_db.ctl --timeout=5 ovsdb-server/compact 2>/dev/null || true sleep 600 done } # initialize variables db="sb" ovn_db_file="/etc/ovn/ovn${db}_db.db" OVN_ARGS="--db-sb-cluster-local-port=9644 --no-monitor" echo "$(date -Iseconds) - starting sbdb " exec /usr/share/ovn/scripts/ovn-ctl \ ${OVN_ARGS} \ --ovn-sb-log="-vconsole:${OVN_LOG_LEVEL} -vfile:off -vPATTERN:console:%D{%Y-%m-%dT%H:%M:%S.###Z}|%05N|%c%T|%p|%m" \ run_sb_ovsdb & db_pid=$! 
compact $db & wait $db_pid env: - name: OVN_LOG_LEVEL value: info image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd9365d7ab0a70fd0d67937853bed13eaece6f49895aed34d7eca038f5e0aa77 imagePullPolicy: IfNotPresent lifecycle: postStart: exec: command: - /bin/bash - -c - | set -x rm -f /var/run/ovn/ovnsb_db.pid preStop: exec: command: - /bin/bash - -c - | echo "$(date -Iseconds) - stopping sbdb" /usr/share/ovn/scripts/ovn-ctl stop_sb_ovsdb echo "$(date -Iseconds) - sbdb stopped" rm -f /var/run/ovn/ovnsb_db.pid name: sbdb readinessProbe: exec: command: - /bin/bash - -c - | set -xeo pipefail /usr/bin/ovn-appctl -t /var/run/ovn/ovnsb_db.ctl --timeout=5 ovsdb-server/memory-trim-on-compaction on 2>/dev/null failureThreshold: 3 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 5 resources: requests: cpu: 10m memory: 10Mi terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /run/openvswitch/ name: run-openvswitch - mountPath: /run/ovn/ name: run-ovn - mountPath: /env name: env-overrides - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-t7qn5 readOnly: true - command: - /bin/bash - -c - | set -xe if [[ -f "/env/_master" ]]; then set -o allexport source "/env/_master" set +o allexport fi # K8S_NODE_IP triggers reconcilation of this daemon when node IP changes echo "$(date -Iseconds) - starting ovnkube-master, Node: ${K8S_NODE} IP: ${K8S_NODE_IP}" echo "I$(date "+%m%d %H:%M:%S.%N") - copy ovn-k8s-cni-overlay" cp -f /usr/libexec/cni/ovn-k8s-cni-overlay /cni-bin-dir/ echo "I$(date "+%m%d %H:%M:%S.%N") - disable conntrack on geneve port" iptables -t raw -A PREROUTING -p udp --dport 6081 -j NOTRACK iptables -t raw -A OUTPUT -p udp --dport 6081 -j NOTRACK ip6tables -t raw -A PREROUTING -p udp --dport 6081 -j NOTRACK ip6tables -t raw -A OUTPUT -p udp --dport 6081 -j NOTRACK echo "I$(date "+%m%d %H:%M:%S.%N") - starting ovnkube-node" gateway_mode_flags="--gateway-mode local 
--gateway-interface br-ex" sysctl net.ipv4.ip_forward=1 gw_interface_flag= # if br-ex1 is configured on the node, we want to use it for external gateway traffic if [ -d /sys/class/net/br-ex1 ]; then gw_interface_flag="--exgw-interface=br-ex1" # the functionality depends on ip_forwarding being enabled fi echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-master - start ovnkube --init-master ${K8S_NODE} --init-node ${K8S_NODE}" exec /usr/bin/ovnkube \ --init-master "${K8S_NODE}" \ --init-node "${K8S_NODE}" \ --config-file=/run/ovnkube-config/ovnkube.conf \ --loglevel "${OVN_KUBE_LOG_LEVEL}" \ ${gateway_mode_flags} \ ${gw_interface_flag} \ --inactivity-probe="180000" \ --nb-address "" \ --sb-address "" \ --enable-multicast \ --disable-snat-multiple-gws \ --acl-logging-rate-limit "20" env: - name: OVN_KUBE_LOG_LEVEL value: "4" - name: K8S_NODE valueFrom: fieldRef: apiVersion: v1 fieldPath: spec.nodeName - name: K8S_NODE_IP valueFrom: fieldRef: apiVersion: v1 fieldPath: status.hostIP image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd9365d7ab0a70fd0d67937853bed13eaece6f49895aed34d7eca038f5e0aa77 imagePullPolicy: IfNotPresent lifecycle: preStop: exec: command: - rm - -f - /etc/cni/net.d/10-ovn-kubernetes.conf name: ovnkube-master readinessProbe: exec: command: - test - -f - /etc/cni/net.d/10-ovn-kubernetes.conf failureThreshold: 3 initialDelaySeconds: 5 periodSeconds: 5 successThreshold: 1 timeoutSeconds: 1 resources: requests: cpu: 10m memory: 60Mi securityContext: privileged: true terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /etc/systemd/system name: systemd-units readOnly: true - mountPath: /run/openvswitch/ name: run-openvswitch - mountPath: /run/ovn/ name: run-ovn - mountPath: /run/ovnkube-config/ name: ovnkube-config - mountPath: /var/lib/microshift/resources/kubeadmin name: kubeconfig - mountPath: /env name: env-overrides - mountPath: /etc/cni/net.d name: host-cni-netd - mountPath: 
/cni-bin-dir name: host-cni-bin - mountPath: /run/ovn-kubernetes/ name: host-run-ovn-kubernetes - mountPath: /dev/log name: log-socket - mountPath: /var/log/ovn name: node-log - mountPath: /host name: host-slash readOnly: true - mountPath: /run/netns mountPropagation: HostToContainer name: host-run-netns readOnly: true - mountPath: /etc/openvswitch name: etc-openvswitch-node - mountPath: /etc/ovn/ name: etc-openvswitch-node - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-t7qn5 readOnly: true dnsPolicy: Default enableServiceLinks: true hostNetwork: true hostPID: true nodeName: release-ci-ci-op-k5cwk1pv-7cb14 nodeSelector: kubernetes.io/os: linux preemptionPolicy: PreemptLowerPriority priority: 2000000000 priorityClassName: system-cluster-critical restartPolicy: Always schedulerName: default-scheduler securityContext: {} serviceAccount: ovn-kubernetes-controller serviceAccountName: ovn-kubernetes-controller terminationGracePeriodSeconds: 30 tolerations: - operator: Exists volumes: - hostPath: path: /etc/systemd/system type: "" name: systemd-units - hostPath: path: /var/run/openvswitch type: "" name: run-openvswitch - hostPath: path: /var/run/ovn type: "" name: run-ovn - hostPath: path: / type: "" name: host-slash - hostPath: path: /run/netns type: "" name: host-run-netns - hostPath: path: /etc/openvswitch type: "" name: etc-openvswitch-node - hostPath: path: /var/log/ovn type: "" name: node-log - hostPath: path: /dev/log type: "" name: log-socket - hostPath: path: /run/ovn-kubernetes type: "" name: host-run-ovn-kubernetes - hostPath: path: /etc/cni/net.d type: "" name: host-cni-netd - hostPath: path: /opt/cni/bin type: "" name: host-cni-bin - hostPath: path: /var/lib/microshift/resources/kubeadmin type: "" name: kubeconfig - configMap: defaultMode: 420 name: ovnkube-config name: ovnkube-config - configMap: defaultMode: 420 name: env-overrides optional: true name: env-overrides - name: kube-api-access-t7qn5 projected: defaultMode: 420 
sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace - configMap: items: - key: service-ca.crt path: service-ca.crt name: openshift-service-ca.crt status: conditions: - lastProbeTime: null lastTransitionTime: "2022-11-15T05:39:11Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2022-11-15T05:39:31Z" status: "True" type: Ready - lastProbeTime: null lastTransitionTime: "2022-11-15T05:39:31Z" status: "True" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2022-11-15T05:39:11Z" status: "True" type: PodScheduled containerStatuses: - containerID: cri-o://8ffa8d8bb0bce065da6545f64ac28109356a0580ca0e99d30eea5fd8938d6a57 image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd9365d7ab0a70fd0d67937853bed13eaece6f49895aed34d7eca038f5e0aa77 imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd9365d7ab0a70fd0d67937853bed13eaece6f49895aed34d7eca038f5e0aa77 lastState: {} name: nbdb ready: true restartCount: 0 started: true state: running: startedAt: "2022-11-15T05:39:25Z" - containerID: cri-o://ead944ddf34591930a2dd893c8d869d22710452d650ed63cb987f8c8067c53bc image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd9365d7ab0a70fd0d67937853bed13eaece6f49895aed34d7eca038f5e0aa77 imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd9365d7ab0a70fd0d67937853bed13eaece6f49895aed34d7eca038f5e0aa77 lastState: {} name: northd ready: true restartCount: 0 started: true state: running: startedAt: "2022-11-15T05:39:25Z" - containerID: cri-o://46e65cefb89093d7c2805fbae3f3761b12c9aaa5a9b49b69fe100e83e94a17e7 image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd9365d7ab0a70fd0d67937853bed13eaece6f49895aed34d7eca038f5e0aa77 imageID: 
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd9365d7ab0a70fd0d67937853bed13eaece6f49895aed34d7eca038f5e0aa77 lastState: {} name: ovnkube-master ready: true restartCount: 0 started: true state: running: startedAt: "2022-11-15T05:39:26Z" - containerID: cri-o://9abd8f3ebb584beb37c8157ff3113b3f15b20b9a876800c646ec0963e0c3f19c image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd9365d7ab0a70fd0d67937853bed13eaece6f49895aed34d7eca038f5e0aa77 imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd9365d7ab0a70fd0d67937853bed13eaece6f49895aed34d7eca038f5e0aa77 lastState: {} name: sbdb ready: true restartCount: 0 started: true state: running: startedAt: "2022-11-15T05:39:26Z" hostIP: 10.0.0.2 phase: Running podIP: 10.0.0.2 podIPs: - ip: 10.0.0.2 qosClass: Burstable startTime: "2022-11-15T05:39:11Z" - apiVersion: v1 kind: Pod metadata: annotations: target.workload.openshift.io/management: '{"effect": "PreferredDuringScheduling"}' creationTimestamp: "2022-11-15T05:39:11Z" generateName: ovnkube-node- labels: app: ovnkube-node component: network controller-revision-hash: 5ff7f6464d kubernetes.io/os: linux openshift.io/component: network pod-template-generation: "1" type: infra name: ovnkube-node-b5wd2 namespace: openshift-ovn-kubernetes ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: DaemonSet name: ovnkube-node uid: c5d65692-5924-4fe7-bbfb-7c2c7f4cfd98 resourceVersion: "533" uid: 360d9d36-e398-4853-b16f-91a1bdd86b68 spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchFields: - key: metadata.name operator: In values: - release-ci-ci-op-k5cwk1pv-7cb14 containers: - command: - /bin/bash - -c - | set -e if [[ -f "/env/${K8S_NODE}" ]]; then set -o allexport source "/env/${K8S_NODE}" set +o allexport fi # K8S_NODE_IP triggers reconcilation of this daemon when node IP changes echo "$(date -Iseconds) - starting ovn-controller, Node: ${K8S_NODE} IP: ${K8S_NODE_IP}" 
exec ovn-controller unix:/var/run/openvswitch/db.sock -vfile:off \ --no-chdir --pidfile=/var/run/ovn/ovn-controller.pid \ --syslog-method="null" \ --log-file=/var/log/ovn/acl-audit-log.log \ -vFACILITY:"local0" \ -vconsole:"${OVN_LOG_LEVEL}" -vconsole:"acl_log:off" \ -vPATTERN:console:"%D{%Y-%m-%dT%H:%M:%S.###Z}|%05N|%c%T|%p|%m" \ -vsyslog:"acl_log:info" \ -vfile:"acl_log:info" env: - name: OVN_LOG_LEVEL value: info - name: K8S_NODE valueFrom: fieldRef: apiVersion: v1 fieldPath: spec.nodeName - name: K8S_NODE_IP valueFrom: fieldRef: apiVersion: v1 fieldPath: status.hostIP image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd9365d7ab0a70fd0d67937853bed13eaece6f49895aed34d7eca038f5e0aa77 imagePullPolicy: IfNotPresent name: ovn-controller resources: requests: cpu: 10m memory: 10Mi securityContext: privileged: true terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /run/openvswitch name: run-openvswitch - mountPath: /run/ovn/ name: run-ovn - mountPath: /etc/openvswitch name: etc-openvswitch - mountPath: /etc/ovn/ name: etc-openvswitch - mountPath: /var/lib/openvswitch name: var-lib-openvswitch - mountPath: /env name: env-overrides - mountPath: /var/log/ovn name: node-log - mountPath: /dev/log name: log-socket - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-rpm6z readOnly: true dnsPolicy: Default enableServiceLinks: true hostNetwork: true hostPID: true nodeName: release-ci-ci-op-k5cwk1pv-7cb14 nodeSelector: kubernetes.io/os: linux preemptionPolicy: PreemptLowerPriority priority: 2000001000 priorityClassName: system-node-critical restartPolicy: Always schedulerName: default-scheduler securityContext: {} serviceAccount: ovn-kubernetes-node serviceAccountName: ovn-kubernetes-node terminationGracePeriodSeconds: 30 tolerations: - operator: Exists volumes: - hostPath: path: /var/lib/openvswitch/data type: "" name: var-lib-openvswitch - hostPath: path: 
/etc/openvswitch type: "" name: etc-openvswitch - hostPath: path: /var/run/openvswitch type: "" name: run-openvswitch - hostPath: path: /var/run/ovn type: "" name: run-ovn - hostPath: path: /var/log/ovn type: "" name: node-log - hostPath: path: /dev/log type: "" name: log-socket - configMap: defaultMode: 420 name: env-overrides optional: true name: env-overrides - name: kube-api-access-rpm6z projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace - configMap: items: - key: service-ca.crt path: service-ca.crt name: openshift-service-ca.crt status: conditions: - lastProbeTime: null lastTransitionTime: "2022-11-15T05:39:11Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2022-11-15T05:39:25Z" status: "True" type: Ready - lastProbeTime: null lastTransitionTime: "2022-11-15T05:39:25Z" status: "True" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2022-11-15T05:39:11Z" status: "True" type: PodScheduled containerStatuses: - containerID: cri-o://774cb06ba4bc637a16073261ea2e73af8e20c1f658dd8b022fb3a385e1f15dc9 image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd9365d7ab0a70fd0d67937853bed13eaece6f49895aed34d7eca038f5e0aa77 imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd9365d7ab0a70fd0d67937853bed13eaece6f49895aed34d7eca038f5e0aa77 lastState: {} name: ovn-controller ready: true restartCount: 0 started: true state: running: startedAt: "2022-11-15T05:39:25Z" hostIP: 10.0.0.2 phase: Running podIP: 10.0.0.2 podIPs: - ip: 10.0.0.2 qosClass: Burstable startTime: "2022-11-15T05:39:11Z" - apiVersion: v1 kind: Pod metadata: annotations: k8s.ovn.org/pod-networks: 
'{"default":{"ip_addresses":["10.42.0.3/24"],"mac_address":"0a:58:0a:2a:00:03","gateway_ips":["10.42.0.1"],"ip_address":"10.42.0.3/24","gateway_ip":"10.42.0.1"}}' openshift.io/scc: restricted-v2 seccomp.security.alpha.kubernetes.io/pod: runtime/default target.workload.openshift.io/management: '{"effect": "PreferredDuringScheduling"}' creationTimestamp: "2022-11-15T05:39:10Z" generateName: service-ca-77fc4cc659- labels: app: service-ca pod-template-hash: 77fc4cc659 service-ca: "true" name: service-ca-77fc4cc659-dp8dn namespace: openshift-service-ca ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: service-ca-77fc4cc659 uid: 07f47f14-3345-4ae3-b734-29fa8434c9e9 resourceVersion: "617" uid: cf8cee7d-02cc-44f5-9e8d-0ff4621360aa spec: containers: - args: - -v=2 command: - service-ca-operator - controller image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2741fb3a1088349c089868bb57bbd1d0416e4425f4e8f95b62df9bdc267a19d7 imagePullPolicy: IfNotPresent name: service-ca-controller ports: - containerPort: 8443 protocol: TCP resources: requests: cpu: 10m memory: 120Mi securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL runAsNonRoot: true runAsUser: 1000070000 terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/run/secrets/signing-key name: signing-key - mountPath: /var/run/configmaps/signing-cabundle name: signing-cabundle - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-hxdl4 readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true nodeName: release-ci-ci-op-k5cwk1pv-7cb14 nodeSelector: node-role.kubernetes.io/master: "" preemptionPolicy: PreemptLowerPriority priority: 2000000000 priorityClassName: system-cluster-critical restartPolicy: Always schedulerName: default-scheduler securityContext: fsGroup: 1000070000 seLinuxOptions: level: s0:c8,c7 seccompProfile: type: RuntimeDefault serviceAccount: 
service-ca serviceAccountName: service-ca terminationGracePeriodSeconds: 30 tolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 120 - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 120 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists volumes: - name: signing-key secret: defaultMode: 420 secretName: signing-key - configMap: defaultMode: 420 name: signing-cabundle name: signing-cabundle - name: kube-api-access-hxdl4 projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace - configMap: items: - key: service-ca.crt path: service-ca.crt name: openshift-service-ca.crt status: conditions: - lastProbeTime: null lastTransitionTime: "2022-11-15T05:39:33Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2022-11-15T05:39:37Z" status: "True" type: Ready - lastProbeTime: null lastTransitionTime: "2022-11-15T05:39:37Z" status: "True" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2022-11-15T05:39:33Z" status: "True" type: PodScheduled containerStatuses: - containerID: cri-o://50ac092ec2bc547d58015508d68066100c00695e649613d9e69c84a0d3c1128a image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2741fb3a1088349c089868bb57bbd1d0416e4425f4e8f95b62df9bdc267a19d7 imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2741fb3a1088349c089868bb57bbd1d0416e4425f4e8f95b62df9bdc267a19d7 lastState: {} name: service-ca-controller ready: true restartCount: 0 started: true state: running: startedAt: "2022-11-15T05:39:37Z" hostIP: 10.0.0.2 phase: Running podIP: 10.42.0.3 podIPs: - ip: 10.42.0.3 qosClass: Burstable startTime: 
"2022-11-15T05:39:33Z" - apiVersion: v1 kind: Pod metadata: annotations: k8s.ovn.org/pod-networks: '{"default":{"ip_addresses":["10.42.0.7/24"],"mac_address":"0a:58:0a:2a:00:07","gateway_ips":["10.42.0.1"],"ip_address":"10.42.0.7/24","gateway_ip":"10.42.0.1"}}' creationTimestamp: "2022-11-15T05:39:10Z" generateName: topolvm-controller-8456864f89- labels: app.kubernetes.io/name: topolvm-controller pod-template-hash: 8456864f89 name: topolvm-controller-8456864f89-vg42d namespace: openshift-storage ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: topolvm-controller-8456864f89 uid: f103bdbd-e7d1-4cfd-b177-ea0a800a5e05 resourceVersion: "713" uid: 9756b5e3-88df-4742-a05c-c5bbceab89ca spec: containers: - command: - /topolvm-controller - --cert-dir=/certs image: registry.redhat.io/odf4/odf-topolvm-rhel8@sha256:362c41177d086fc7c8d4fa4ac3bbedb18b1902e950feead9219ea59d1ad0e7ad imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 3 httpGet: path: /healthz port: healthz scheme: HTTP initialDelaySeconds: 10 periodSeconds: 60 successThreshold: 1 timeoutSeconds: 3 name: topolvm-controller ports: - containerPort: 9808 name: healthz protocol: TCP readinessProbe: failureThreshold: 3 httpGet: path: /metrics port: 8080 scheme: HTTP periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 resources: requests: cpu: 250m memory: 250Mi terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /run/topolvm name: socket-dir - mountPath: /certs name: certs - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-fnrvt readOnly: true - args: - --csi-address=/run/topolvm/csi-topolvm.sock - --enable-capacity - --capacity-ownerref-level=2 - --capacity-poll-interval=30s - --feature-gates=Topology=true env: - name: POD_NAME valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.name - name: NAMESPACE valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.namespace 
image: registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4b7d8035055a867b14265495bd2787db608b9ff39ed4e6f65ff24488a2e488d2 imagePullPolicy: IfNotPresent name: csi-provisioner resources: requests: cpu: 100m memory: 100Mi terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /run/topolvm name: socket-dir - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-fnrvt readOnly: true - args: - --csi-address=/run/topolvm/csi-topolvm.sock image: registry.redhat.io/openshift4/ose-csi-external-resizer@sha256:ca34c46c4a4c1a4462b8aa89d1dbb5427114da098517954895ff797146392898 imagePullPolicy: IfNotPresent name: csi-resizer resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /run/topolvm name: socket-dir - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-fnrvt readOnly: true - args: - --csi-address=/run/topolvm/csi-topolvm.sock image: registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e4b0f6c89a12d26babdc2feae7d13d3f281ac4d38c24614c13c230b4a29ec56e imagePullPolicy: IfNotPresent name: liveness-probe resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /run/topolvm name: socket-dir - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-fnrvt readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true initContainers: - command: - /usr/bin/bash - -c - openssl req -nodes -x509 -newkey rsa:4096 -subj '/DC=self_signed_certificate' -keyout /certs/tls.key -out /certs/tls.crt -days 3650 image: registry.access.redhat.com/ubi8/openssl@sha256:8b41865d30b7947de68a9c1747616bce4efab4f60f68f8b7016cd84d7708af6b imagePullPolicy: IfNotPresent name: self-signed-cert-generator resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /certs name: certs - 
mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-fnrvt readOnly: true nodeName: release-ci-ci-op-k5cwk1pv-7cb14 preemptionPolicy: PreemptLowerPriority priority: 2000000000 priorityClassName: system-cluster-critical restartPolicy: Always schedulerName: default-scheduler securityContext: {} serviceAccount: topolvm-controller serviceAccountName: topolvm-controller terminationGracePeriodSeconds: 30 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists volumes: - emptyDir: {} name: socket-dir - emptyDir: {} name: certs - name: kube-api-access-fnrvt projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace - configMap: items: - key: service-ca.crt path: service-ca.crt name: openshift-service-ca.crt status: conditions: - lastProbeTime: null lastTransitionTime: "2022-11-15T05:39:38Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2022-11-15T05:39:53Z" status: "True" type: Ready - lastProbeTime: null lastTransitionTime: "2022-11-15T05:39:53Z" status: "True" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2022-11-15T05:39:33Z" status: "True" type: PodScheduled containerStatuses: - containerID: cri-o://ee859c97d7835a634878a3bd6a56621bc6aa299ad4e046a6d83a21646a43bbc7 image: registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4b7d8035055a867b14265495bd2787db608b9ff39ed4e6f65ff24488a2e488d2 imageID: registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4b7d8035055a867b14265495bd2787db608b9ff39ed4e6f65ff24488a2e488d2 lastState: {} name: csi-provisioner 
ready: true restartCount: 0 started: true state: running: startedAt: "2022-11-15T05:39:47Z" - containerID: cri-o://a08e4f6a696fda341f50830b1a2aee1a4e771813cfe04f5830a029366ed5ef06 image: registry.redhat.io/openshift4/ose-csi-external-resizer@sha256:ca34c46c4a4c1a4462b8aa89d1dbb5427114da098517954895ff797146392898 imageID: registry.redhat.io/openshift4/ose-csi-external-resizer@sha256:ca34c46c4a4c1a4462b8aa89d1dbb5427114da098517954895ff797146392898 lastState: {} name: csi-resizer ready: true restartCount: 0 started: true state: running: startedAt: "2022-11-15T05:39:50Z" - containerID: cri-o://c2d9c8e2edf1ff2fb5351b7c35d3260c7b2ca7f29fab76bcfcb9e5bfa9037a62 image: registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e4b0f6c89a12d26babdc2feae7d13d3f281ac4d38c24614c13c230b4a29ec56e imageID: registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e4b0f6c89a12d26babdc2feae7d13d3f281ac4d38c24614c13c230b4a29ec56e lastState: {} name: liveness-probe ready: true restartCount: 0 started: true state: running: startedAt: "2022-11-15T05:39:53Z" - containerID: cri-o://f2f5d3385c45532b43affd2ccaf5fd88e8032d2606725c3d7a20690fe30c437d image: registry.redhat.io/odf4/odf-topolvm-rhel8@sha256:362c41177d086fc7c8d4fa4ac3bbedb18b1902e950feead9219ea59d1ad0e7ad imageID: registry.redhat.io/odf4/odf-topolvm-rhel8@sha256:362c41177d086fc7c8d4fa4ac3bbedb18b1902e950feead9219ea59d1ad0e7ad lastState: {} name: topolvm-controller ready: true restartCount: 0 started: true state: running: startedAt: "2022-11-15T05:39:43Z" hostIP: 10.0.0.2 initContainerStatuses: - containerID: cri-o://4d3eb9670384204928984c5c4142be7f5e984b21e213d0c36934f1460bfbd076 image: registry.access.redhat.com/ubi8/openssl@sha256:8b41865d30b7947de68a9c1747616bce4efab4f60f68f8b7016cd84d7708af6b imageID: registry.access.redhat.com/ubi8/openssl@sha256:8b41865d30b7947de68a9c1747616bce4efab4f60f68f8b7016cd84d7708af6b lastState: {} name: self-signed-cert-generator ready: true restartCount: 0 state: terminated: containerID: 
cri-o://4d3eb9670384204928984c5c4142be7f5e984b21e213d0c36934f1460bfbd076 exitCode: 0 finishedAt: "2022-11-15T05:39:38Z" reason: Completed startedAt: "2022-11-15T05:39:37Z" phase: Running podIP: 10.42.0.7 podIPs: - ip: 10.42.0.7 qosClass: Burstable startTime: "2022-11-15T05:39:33Z" - apiVersion: v1 kind: Pod metadata: annotations: k8s.ovn.org/pod-networks: '{"default":{"ip_addresses":["10.42.0.5/24"],"mac_address":"0a:58:0a:2a:00:05","gateway_ips":["10.42.0.1"],"ip_address":"10.42.0.5/24","gateway_ip":"10.42.0.1"}}' odf-lvm.microshift.io/lvmd_config_sha256sum: cd811881ede06f69cba50cf9408e349ccf3edca76a9aec93d8cd35ba04e6033d creationTimestamp: "2022-11-15T05:39:33Z" generateName: topolvm-node- labels: app: topolvm-node controller-revision-hash: 58576f646 pod-template-generation: "1" name: topolvm-node-2tnh5 namespace: openshift-storage ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: DaemonSet name: topolvm-node uid: d572a74f-279f-475b-97be-a82353f70a89 resourceVersion: "619" uid: 30861194-030d-40d4-86be-44594d858fac spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchFields: - key: metadata.name operator: In values: - release-ci-ci-op-k5cwk1pv-7cb14 containers: - command: - /lvmd - --config=/etc/topolvm/lvmd.yaml - --container=true image: registry.redhat.io/odf4/odf-topolvm-rhel8@sha256:362c41177d086fc7c8d4fa4ac3bbedb18b1902e950feead9219ea59d1ad0e7ad imagePullPolicy: IfNotPresent name: lvmd resources: requests: cpu: 250m memory: 250Mi securityContext: privileged: true runAsUser: 0 terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /run/lvmd name: lvmd-socket-dir - mountPath: /etc/topolvm name: lvmd-config-dir - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-jc2nm readOnly: true - command: - /topolvm-node - --lvmd-socket=/run/lvmd/lvmd.socket env: - name: NODE_NAME valueFrom: fieldRef: 
apiVersion: v1 fieldPath: spec.nodeName image: registry.redhat.io/odf4/odf-topolvm-rhel8@sha256:362c41177d086fc7c8d4fa4ac3bbedb18b1902e950feead9219ea59d1ad0e7ad imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 3 httpGet: path: /healthz port: healthz scheme: HTTP initialDelaySeconds: 10 periodSeconds: 60 successThreshold: 1 timeoutSeconds: 3 name: topolvm-node ports: - containerPort: 9808 name: healthz protocol: TCP resources: requests: cpu: 250m memory: 250Mi securityContext: privileged: true runAsUser: 0 terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /run/topolvm name: node-plugin-dir - mountPath: /run/lvmd name: lvmd-socket-dir - mountPath: /var/lib/kubelet/pods mountPropagation: Bidirectional name: pod-volumes-dir - mountPath: /var/lib/kubelet/plugins/kubernetes.io/csi mountPropagation: Bidirectional name: csi-plugin-dir - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-jc2nm readOnly: true - args: - --csi-address=/run/topolvm/csi-topolvm.sock - --kubelet-registration-path=/var/lib/kubelet/plugins/topolvm.cybozu.com/node/csi-topolvm.sock image: registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:3babcf219371017d92f8bc3301de6c63681fcfaa8c344ec7891c8e84f31420eb imagePullPolicy: IfNotPresent lifecycle: preStop: exec: command: - /bin/sh - -c - rm -rf /registration/topolvm.cybozu.com /registration/topolvm.cybozu.com-reg.sock name: csi-registrar resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /run/topolvm name: node-plugin-dir - mountPath: /registration name: registration-dir - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-jc2nm readOnly: true - args: - --csi-address=/run/topolvm/csi-topolvm.sock image: registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e4b0f6c89a12d26babdc2feae7d13d3f281ac4d38c24614c13c230b4a29ec56e imagePullPolicy: IfNotPresent 
name: liveness-probe resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /run/topolvm name: node-plugin-dir - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-jc2nm readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true hostPID: true initContainers: - command: - /usr/bin/bash - -c - until [ -f /etc/topolvm/lvmd.yaml ]; do echo waiting for lvmd config file; sleep 5; done image: registry.access.redhat.com/ubi8/openssl@sha256:8b41865d30b7947de68a9c1747616bce4efab4f60f68f8b7016cd84d7708af6b imagePullPolicy: IfNotPresent name: file-checker resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /etc/topolvm name: lvmd-config-dir - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-jc2nm readOnly: true nodeName: release-ci-ci-op-k5cwk1pv-7cb14 preemptionPolicy: PreemptLowerPriority priority: 2000001000 priorityClassName: system-node-critical restartPolicy: Always schedulerName: default-scheduler securityContext: {} serviceAccount: topolvm-node serviceAccountName: topolvm-node terminationGracePeriodSeconds: 30 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists - effect: NoSchedule key: node.kubernetes.io/disk-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/pid-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/unschedulable operator: Exists volumes: - hostPath: path: /var/lib/kubelet/plugins_registry/ type: Directory name: registration-dir - hostPath: path: /var/lib/kubelet/plugins/topolvm.cybozu.com/node type: DirectoryOrCreate name: node-plugin-dir - hostPath: path: /var/lib/kubelet/plugins/kubernetes.io/csi type: DirectoryOrCreate name: csi-plugin-dir - 
hostPath: path: /var/lib/kubelet/pods/ type: DirectoryOrCreate name: pod-volumes-dir - configMap: defaultMode: 420 items: - key: lvmd.yaml path: lvmd.yaml name: lvmd name: lvmd-config-dir - emptyDir: medium: Memory name: lvmd-socket-dir - name: kube-api-access-jc2nm projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace - configMap: items: - key: service-ca.crt path: service-ca.crt name: openshift-service-ca.crt status: conditions: - lastProbeTime: null lastTransitionTime: "2022-11-15T05:39:37Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2022-11-15T05:39:33Z" message: 'containers with unready status: [lvmd topolvm-node csi-registrar liveness-probe]' reason: ContainersNotReady status: "False" type: Ready - lastProbeTime: null lastTransitionTime: "2022-11-15T05:39:33Z" message: 'containers with unready status: [lvmd topolvm-node csi-registrar liveness-probe]' reason: ContainersNotReady status: "False" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2022-11-15T05:39:33Z" status: "True" type: PodScheduled containerStatuses: - image: registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:3babcf219371017d92f8bc3301de6c63681fcfaa8c344ec7891c8e84f31420eb imageID: "" lastState: {} name: csi-registrar ready: false restartCount: 0 started: false state: waiting: reason: PodInitializing - image: registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e4b0f6c89a12d26babdc2feae7d13d3f281ac4d38c24614c13c230b4a29ec56e imageID: "" lastState: {} name: liveness-probe ready: false restartCount: 0 started: false state: waiting: reason: PodInitializing - image: registry.redhat.io/odf4/odf-topolvm-rhel8@sha256:362c41177d086fc7c8d4fa4ac3bbedb18b1902e950feead9219ea59d1ad0e7ad imageID: "" lastState: {} name: lvmd ready: false 
restartCount: 0 started: false state: waiting: reason: PodInitializing - image: registry.redhat.io/odf4/odf-topolvm-rhel8@sha256:362c41177d086fc7c8d4fa4ac3bbedb18b1902e950feead9219ea59d1ad0e7ad imageID: "" lastState: {} name: topolvm-node ready: false restartCount: 0 started: false state: waiting: reason: PodInitializing hostIP: 10.0.0.2 initContainerStatuses: - containerID: cri-o://a4656d515dbf6e259ef944a9ae945e263ea333263991044147fbfe621bf46b86 image: registry.access.redhat.com/ubi8/openssl@sha256:8b41865d30b7947de68a9c1747616bce4efab4f60f68f8b7016cd84d7708af6b imageID: registry.access.redhat.com/ubi8/openssl@sha256:8b41865d30b7947de68a9c1747616bce4efab4f60f68f8b7016cd84d7708af6b lastState: {} name: file-checker ready: true restartCount: 0 state: terminated: containerID: cri-o://a4656d515dbf6e259ef944a9ae945e263ea333263991044147fbfe621bf46b86 exitCode: 0 finishedAt: "2022-11-15T05:39:37Z" reason: Completed startedAt: "2022-11-15T05:39:37Z" phase: Pending podIP: 10.42.0.5 podIPs: - ip: 10.42.0.5 qosClass: Burstable startTime: "2022-11-15T05:39:33Z" kind: List metadata: resourceVersion: ""
+ kubectl get events -A
NAMESPACE   LAST SEEN   TYPE      REASON    OBJECT    MESSAGE
default     Warning   KubeAPIReadyz   namespace/openshift-kube-apiserver   readyz=true
default   9m37s   Normal   Starting   node/release-ci-ci-op-k5cwk1pv-7cb14   Starting kubelet.
default   9m37s   Normal   NodeHasSufficientMemory   node/release-ci-ci-op-k5cwk1pv-7cb14   Node release-ci-ci-op-k5cwk1pv-7cb14 status is now: NodeHasSufficientMemory
default   9m37s   Normal   NodeHasNoDiskPressure   node/release-ci-ci-op-k5cwk1pv-7cb14   Node release-ci-ci-op-k5cwk1pv-7cb14 status is now: NodeHasNoDiskPressure
default   9m37s   Normal   NodeHasSufficientPID   node/release-ci-ci-op-k5cwk1pv-7cb14   Node release-ci-ci-op-k5cwk1pv-7cb14 status is now: NodeHasSufficientPID
default   9m32s   Normal   NodeAllocatableEnforced   node/release-ci-ci-op-k5cwk1pv-7cb14   Updated Node Allocatable limit across pods
default   9m29s   Normal   RegisteredNode   node/release-ci-ci-op-k5cwk1pv-7cb14   Node release-ci-ci-op-k5cwk1pv-7cb14 event: Registered Node release-ci-ci-op-k5cwk1pv-7cb14 in Controller
default   9m6s   Normal   NodeReady   node/release-ci-ci-op-k5cwk1pv-7cb14   Node release-ci-ci-op-k5cwk1pv-7cb14 status is now: NodeReady
openshift-dns   9m6s   Normal   Scheduled   pod/dns-default-tw2xt   Successfully assigned openshift-dns/dns-default-tw2xt to release-ci-ci-op-k5cwk1pv-7cb14
openshift-dns   9m2s   Warning   FailedMount   pod/dns-default-tw2xt   MountVolume.SetUp failed for volume "metrics-tls" : secret "dns-default-metrics-tls" not found
openshift-dns   8m57s   Normal   Pulling   pod/dns-default-tw2xt   Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:efa3a9aae6ad83d0eec44b654e75a11ed4887a4d25f4a7412a102456688a5840"
openshift-dns   8m54s   Normal   Pulled   pod/dns-default-tw2xt   Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:efa3a9aae6ad83d0eec44b654e75a11ed4887a4d25f4a7412a102456688a5840" in 3.120926845s
openshift-dns   8m54s   Normal   Created   pod/dns-default-tw2xt   Created container dns
openshift-dns   8m53s   Normal   Started   pod/dns-default-tw2xt   Started container dns
openshift-dns   8m53s   Normal   Pulling   pod/dns-default-tw2xt   Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3b75e8e24ef2235370454dd557ae59f93bf47c3e17653109ab7d869633728347"
openshift-dns   8m50s   Normal   Pulled   pod/dns-default-tw2xt   Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3b75e8e24ef2235370454dd557ae59f93bf47c3e17653109ab7d869633728347" in 3.258941881s
openshift-dns   8m50s   Normal   Created   pod/dns-default-tw2xt   Created container kube-rbac-proxy
openshift-dns   8m50s   Normal   Started   pod/dns-default-tw2xt   Started container kube-rbac-proxy
openshift-dns   9m6s   Normal   SuccessfulCreate   daemonset/dns-default   Created pod: dns-default-tw2xt
openshift-dns   9m28s   Normal   Scheduled   pod/node-resolver-jhcw4   Successfully assigned openshift-dns/node-resolver-jhcw4 to release-ci-ci-op-k5cwk1pv-7cb14
openshift-dns   9m21s   Normal   Pulling   pod/node-resolver-jhcw4   Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24bfe9543bd34c6c3124c39c319ef0ec20534aec974126617752b2883d6d6cff"
openshift-dns   9m16s   Normal   Pulled   pod/node-resolver-jhcw4   Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24bfe9543bd34c6c3124c39c319ef0ec20534aec974126617752b2883d6d6cff" in 4.102921117s
openshift-dns   9m16s   Normal   Created   pod/node-resolver-jhcw4   Created container dns-node-resolver
openshift-dns   9m16s   Normal   Started   pod/node-resolver-jhcw4   Started container dns-node-resolver
openshift-dns   9m28s   Normal   SuccessfulCreate   daemonset/node-resolver   Created pod: node-resolver-jhcw4
openshift-ingress   9m28s   Warning   FailedScheduling   pod/router-default-76b7657c68-6xcfc   0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-ingress   9m6s   Normal   Scheduled   pod/router-default-76b7657c68-6xcfc   Successfully assigned openshift-ingress/router-default-76b7657c68-6xcfc to release-ci-ci-op-k5cwk1pv-7cb14
openshift-ingress   9m2s   Warning   FailedMount   pod/router-default-76b7657c68-6xcfc   MountVolume.SetUp failed for volume "service-ca-bundle" : configmap references non-existent config key: service-ca.crt
openshift-ingress   9m4s   Normal   TaintManagerEviction   pod/router-default-76b7657c68-6xcfc   Cancelling deletion of Pod openshift-ingress/router-default-76b7657c68-6xcfc
openshift-ingress   8m57s   Normal   Pulling   pod/router-default-76b7657c68-6xcfc   Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2e14189c0096c2294368a1a8edd7dec5f30c93f8bbd614da0e78127c8b194ab7"
openshift-ingress   8m54s   Normal   Pulled   pod/router-default-76b7657c68-6xcfc   Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2e14189c0096c2294368a1a8edd7dec5f30c93f8bbd614da0e78127c8b194ab7" in 3.118996826s
openshift-ingress   8m54s   Normal   Created   pod/router-default-76b7657c68-6xcfc   Created container router
openshift-ingress   8m54s   Normal   Started   pod/router-default-76b7657c68-6xcfc   Started container router
openshift-ingress   9m28s   Normal   SuccessfulCreate   replicaset/router-default-76b7657c68   Created pod: router-default-76b7657c68-6xcfc
openshift-ingress   9m29s   Normal   ScalingReplicaSet   deployment/router-default   Scaled up replica set router-default-76b7657c68 to 1
openshift-kube-controller-manager   9m42s   Warning   FastControllerResync   namespace/openshift-kube-controller-manager   Controller "namespace-security-allocation-controller" resync interval is set to 0s which might lead to client request throttling
openshift-kube-controller-manager   9m42s   Warning   FastControllerResync   namespace/openshift-kube-controller-manager   Controller "pod-security-admission-label-synchronization-controller" resync interval is set to 0s which might lead to client request throttling
openshift-kube-controller-manager   9m31s   Normal
CreatedSCCRanges namespace/openshift-kube-controller-manager created SCC ranges for kube-system namespace openshift-kube-controller-manager 9m31s Normal CreatedSCCRanges namespace/openshift-kube-controller-manager created SCC ranges for kube-public namespace openshift-kube-controller-manager 9m31s Normal CreatedSCCRanges namespace/openshift-kube-controller-manager created SCC ranges for kube-node-lease namespace openshift-kube-controller-manager 9m31s Normal CreatedSCCRanges namespace/openshift-kube-controller-manager created SCC ranges for default namespace openshift-kube-controller-manager 9m31s Normal CreatedSCCRanges namespace/openshift-kube-controller-manager created SCC ranges for openshift-kube-controller-manager namespace openshift-kube-controller-manager 9m31s Normal CreatedSCCRanges namespace/openshift-kube-controller-manager created SCC ranges for openshift-infra namespace openshift-kube-controller-manager 9m31s Normal CreatedSCCRanges namespace/openshift-kube-controller-manager created SCC ranges for openshift-route-controller-manager namespace openshift-kube-controller-manager 9m31s Normal CreatedSCCRanges namespace/openshift-kube-controller-manager created SCC ranges for openshift-service-ca namespace openshift-kube-controller-manager 9m31s Normal CreatedSCCRanges namespace/openshift-kube-controller-manager created SCC ranges for openshift-storage namespace openshift-kube-controller-manager 9m30s Normal CreatedSCCRanges namespace/openshift-kube-controller-manager (combined from similar events): created SCC ranges for openshift-ovn-kubernetes namespace openshift-ovn-kubernetes 9m13s Normal LeaderElection configmap/ovn-kubernetes-master release-ci-ci-op-k5cwk1pv-7cb14 became leader openshift-ovn-kubernetes 9m13s Normal LeaderElection lease/ovn-kubernetes-master release-ci-ci-op-k5cwk1pv-7cb14 became leader openshift-ovn-kubernetes 9m28s Normal Scheduled pod/ovnkube-master-kdsb7 Successfully assigned openshift-ovn-kubernetes/ovnkube-master-kdsb7 to 
release-ci-ci-op-k5cwk1pv-7cb14 openshift-ovn-kubernetes 9m21s Normal Pulling pod/ovnkube-master-kdsb7 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd9365d7ab0a70fd0d67937853bed13eaece6f49895aed34d7eca038f5e0aa77" openshift-ovn-kubernetes 9m14s Normal Pulled pod/ovnkube-master-kdsb7 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd9365d7ab0a70fd0d67937853bed13eaece6f49895aed34d7eca038f5e0aa77" in 6.615139723s openshift-ovn-kubernetes 9m14s Normal Created pod/ovnkube-master-kdsb7 Created container northd openshift-ovn-kubernetes 9m14s Normal Started pod/ovnkube-master-kdsb7 Started container northd openshift-ovn-kubernetes 9m14s Normal Pulled pod/ovnkube-master-kdsb7 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd9365d7ab0a70fd0d67937853bed13eaece6f49895aed34d7eca038f5e0aa77" already present on machine openshift-ovn-kubernetes 9m14s Normal Created pod/ovnkube-master-kdsb7 Created container nbdb openshift-ovn-kubernetes 9m14s Normal Started pod/ovnkube-master-kdsb7 Started container nbdb openshift-ovn-kubernetes 9m13s Normal Pulled pod/ovnkube-master-kdsb7 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd9365d7ab0a70fd0d67937853bed13eaece6f49895aed34d7eca038f5e0aa77" already present on machine openshift-ovn-kubernetes 9m13s Normal Created pod/ovnkube-master-kdsb7 Created container sbdb openshift-ovn-kubernetes 9m13s Normal Started pod/ovnkube-master-kdsb7 Started container sbdb openshift-ovn-kubernetes 9m13s Normal Pulled pod/ovnkube-master-kdsb7 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd9365d7ab0a70fd0d67937853bed13eaece6f49895aed34d7eca038f5e0aa77" already present on machine openshift-ovn-kubernetes 9m13s Normal Created pod/ovnkube-master-kdsb7 Created container ovnkube-master openshift-ovn-kubernetes 9m13s Normal Started pod/ovnkube-master-kdsb7 Started container ovnkube-master openshift-ovn-kubernetes 9m28s Normal 
SuccessfulCreate daemonset/ovnkube-master Created pod: ovnkube-master-kdsb7 openshift-ovn-kubernetes 9m28s Normal Scheduled pod/ovnkube-node-b5wd2 Successfully assigned openshift-ovn-kubernetes/ovnkube-node-b5wd2 to release-ci-ci-op-k5cwk1pv-7cb14 openshift-ovn-kubernetes 9m21s Normal Pulling pod/ovnkube-node-b5wd2 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd9365d7ab0a70fd0d67937853bed13eaece6f49895aed34d7eca038f5e0aa77" openshift-ovn-kubernetes 9m14s Normal Pulled pod/ovnkube-node-b5wd2 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd9365d7ab0a70fd0d67937853bed13eaece6f49895aed34d7eca038f5e0aa77" in 6.620824362s openshift-ovn-kubernetes 9m14s Normal Created pod/ovnkube-node-b5wd2 Created container ovn-controller openshift-ovn-kubernetes 9m14s Normal Started pod/ovnkube-node-b5wd2 Started container ovn-controller openshift-ovn-kubernetes 9m28s Normal SuccessfulCreate daemonset/ovnkube-node Created pod: ovnkube-node-b5wd2 openshift-route-controller-manager 9m42s Normal LeaderElection lease/openshift-route-controllers release-ci-ci-op-k5cwk1pv-7cb14 became leader openshift-service-ca 9m28s Warning FailedScheduling pod/service-ca-77fc4cc659-dp8dn 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. 
openshift-service-ca 9m6s Normal Scheduled pod/service-ca-77fc4cc659-dp8dn Successfully assigned openshift-service-ca/service-ca-77fc4cc659-dp8dn to release-ci-ci-op-k5cwk1pv-7cb14 openshift-service-ca 9m5s Normal Pulling pod/service-ca-77fc4cc659-dp8dn Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2741fb3a1088349c089868bb57bbd1d0416e4425f4e8f95b62df9bdc267a19d7" openshift-service-ca 9m4s Normal TaintManagerEviction pod/service-ca-77fc4cc659-dp8dn Cancelling deletion of Pod openshift-service-ca/service-ca-77fc4cc659-dp8dn openshift-service-ca 9m2s Normal Pulled pod/service-ca-77fc4cc659-dp8dn Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2741fb3a1088349c089868bb57bbd1d0416e4425f4e8f95b62df9bdc267a19d7" in 3.004804839s openshift-service-ca 9m2s Normal Created pod/service-ca-77fc4cc659-dp8dn Created container service-ca-controller openshift-service-ca 9m2s Normal Started pod/service-ca-77fc4cc659-dp8dn Started container service-ca-controller openshift-service-ca 9m28s Normal SuccessfulCreate replicaset/service-ca-77fc4cc659 Created pod: service-ca-77fc4cc659-dp8dn openshift-service-ca 9m1s Normal LeaderElection configmap/service-ca-controller-lock service-ca-77fc4cc659-dp8dn_e2ac9889-b171-4edc-a0e0-51283b761592 became leader openshift-service-ca 9m1s Normal LeaderElection lease/service-ca-controller-lock service-ca-77fc4cc659-dp8dn_e2ac9889-b171-4edc-a0e0-51283b761592 became leader openshift-service-ca 9m29s Normal ScalingReplicaSet deployment/service-ca Scaled up replica set service-ca-77fc4cc659 to 1 openshift-service-ca 9m1s Warning ClusterInfrastructureStatus deployment/service-ca unable to get cluster infrastructure status, using HA cluster values for leader election: the server could not find the requested resource (get infrastructures.config.openshift.io cluster) openshift-service-ca 9m1s Warning FastControllerResync deployment/service-ca Controller "APIServiceCABundleInjector" resync interval is set to 
0s which might lead to client request throttling openshift-service-ca 9m1s Warning FastControllerResync deployment/service-ca Controller "ConfigMapCABundleInjector" resync interval is set to 0s which might lead to client request throttling openshift-service-ca 9m1s Warning FastControllerResync deployment/service-ca Controller "CRDCABundleInjector" resync interval is set to 0s which might lead to client request throttling openshift-service-ca 9m1s Warning FastControllerResync deployment/service-ca Controller "MutatingWebhookCABundleInjector" resync interval is set to 0s which might lead to client request throttling openshift-service-ca 9m1s Warning FastControllerResync deployment/service-ca Controller "ValidatingWebhookCABundleInjector" resync interval is set to 0s which might lead to client request throttling openshift-service-ca 9m1s Warning FastControllerResync deployment/service-ca Controller "LegacyVulnerableConfigMapCABundleInjector" resync interval is set to 0s which might lead to client request throttling openshift-service-ca 9m1s Warning FastControllerResync deployment/service-ca Controller "ServiceServingCertController" resync interval is set to 0s which might lead to client request throttling openshift-service-ca 9m1s Warning FastControllerResync deployment/service-ca Controller "ServiceServingCertUpdateController" resync interval is set to 0s which might lead to client request throttling openshift-storage 9m28s Warning FailedScheduling pod/topolvm-controller-8456864f89-vg42d 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. 
openshift-storage 9m6s Normal Scheduled pod/topolvm-controller-8456864f89-vg42d Successfully assigned openshift-storage/topolvm-controller-8456864f89-vg42d to release-ci-ci-op-k5cwk1pv-7cb14 openshift-storage 9m5s Normal Pulling pod/topolvm-controller-8456864f89-vg42d Pulling image "registry.access.redhat.com/ubi8/openssl@sha256:8b41865d30b7947de68a9c1747616bce4efab4f60f68f8b7016cd84d7708af6b" openshift-storage 9m4s Normal TaintManagerEviction pod/topolvm-controller-8456864f89-vg42d Cancelling deletion of Pod openshift-storage/topolvm-controller-8456864f89-vg42d openshift-storage 9m2s Normal Pulled pod/topolvm-controller-8456864f89-vg42d Successfully pulled image "registry.access.redhat.com/ubi8/openssl@sha256:8b41865d30b7947de68a9c1747616bce4efab4f60f68f8b7016cd84d7708af6b" in 2.863748219s openshift-storage 9m2s Normal Created pod/topolvm-controller-8456864f89-vg42d Created container self-signed-cert-generator openshift-storage 9m2s Normal Started pod/topolvm-controller-8456864f89-vg42d Started container self-signed-cert-generator openshift-storage 9m1s Normal Pulling pod/topolvm-controller-8456864f89-vg42d Pulling image "registry.redhat.io/odf4/odf-topolvm-rhel8@sha256:362c41177d086fc7c8d4fa4ac3bbedb18b1902e950feead9219ea59d1ad0e7ad" openshift-storage 8m56s Normal Pulled pod/topolvm-controller-8456864f89-vg42d Successfully pulled image "registry.redhat.io/odf4/odf-topolvm-rhel8@sha256:362c41177d086fc7c8d4fa4ac3bbedb18b1902e950feead9219ea59d1ad0e7ad" in 4.71614729s openshift-storage 8m56s Normal Created pod/topolvm-controller-8456864f89-vg42d Created container topolvm-controller openshift-storage 8m56s Normal Started pod/topolvm-controller-8456864f89-vg42d Started container topolvm-controller openshift-storage 8m56s Normal Pulling pod/topolvm-controller-8456864f89-vg42d Pulling image "registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4b7d8035055a867b14265495bd2787db608b9ff39ed4e6f65ff24488a2e488d2" openshift-storage 8m52s Normal Pulled 
pod/topolvm-controller-8456864f89-vg42d Successfully pulled image "registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4b7d8035055a867b14265495bd2787db608b9ff39ed4e6f65ff24488a2e488d2" in 3.738667132s openshift-storage 8m52s Normal Created pod/topolvm-controller-8456864f89-vg42d Created container csi-provisioner openshift-storage 8m52s Normal Started pod/topolvm-controller-8456864f89-vg42d Started container csi-provisioner openshift-storage 8m52s Normal Pulling pod/topolvm-controller-8456864f89-vg42d Pulling image "registry.redhat.io/openshift4/ose-csi-external-resizer@sha256:ca34c46c4a4c1a4462b8aa89d1dbb5427114da098517954895ff797146392898" openshift-storage 8m49s Normal Pulled pod/topolvm-controller-8456864f89-vg42d Successfully pulled image "registry.redhat.io/openshift4/ose-csi-external-resizer@sha256:ca34c46c4a4c1a4462b8aa89d1dbb5427114da098517954895ff797146392898" in 2.966738492s openshift-storage 8m49s Normal Created pod/topolvm-controller-8456864f89-vg42d Created container csi-resizer openshift-storage 8m49s Normal Started pod/topolvm-controller-8456864f89-vg42d Started container csi-resizer openshift-storage 8m49s Normal Pulling pod/topolvm-controller-8456864f89-vg42d Pulling image "registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e4b0f6c89a12d26babdc2feae7d13d3f281ac4d38c24614c13c230b4a29ec56e" openshift-storage 8m46s Normal Pulled pod/topolvm-controller-8456864f89-vg42d Successfully pulled image "registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e4b0f6c89a12d26babdc2feae7d13d3f281ac4d38c24614c13c230b4a29ec56e" in 2.426765222s openshift-storage 8m46s Normal Created pod/topolvm-controller-8456864f89-vg42d Created container liveness-probe openshift-storage 8m46s Normal Started pod/topolvm-controller-8456864f89-vg42d Started container liveness-probe openshift-storage 9m29s Normal SuccessfulCreate replicaset/topolvm-controller-8456864f89 Created pod: topolvm-controller-8456864f89-vg42d openshift-storage 9m29s Normal 
ScalingReplicaSet deployment/topolvm-controller Scaled up replica set topolvm-controller-8456864f89 to 1 openshift-storage 9m6s Normal Scheduled pod/topolvm-node-2tnh5 Successfully assigned openshift-storage/topolvm-node-2tnh5 to release-ci-ci-op-k5cwk1pv-7cb14 openshift-storage 9m5s Normal Pulling pod/topolvm-node-2tnh5 Pulling image "registry.access.redhat.com/ubi8/openssl@sha256:8b41865d30b7947de68a9c1747616bce4efab4f60f68f8b7016cd84d7708af6b" openshift-storage 9m2s Normal Pulled pod/topolvm-node-2tnh5 Successfully pulled image "registry.access.redhat.com/ubi8/openssl@sha256:8b41865d30b7947de68a9c1747616bce4efab4f60f68f8b7016cd84d7708af6b" in 2.862705905s openshift-storage 9m2s Normal Created pod/topolvm-node-2tnh5 Created container file-checker openshift-storage 9m2s Normal Started pod/topolvm-node-2tnh5 Started container file-checker openshift-storage 9m2s Normal Pulling pod/topolvm-node-2tnh5 Pulling image "registry.redhat.io/odf4/odf-topolvm-rhel8@sha256:362c41177d086fc7c8d4fa4ac3bbedb18b1902e950feead9219ea59d1ad0e7ad" openshift-storage 8m56s Normal Pulled pod/topolvm-node-2tnh5 Successfully pulled image "registry.redhat.io/odf4/odf-topolvm-rhel8@sha256:362c41177d086fc7c8d4fa4ac3bbedb18b1902e950feead9219ea59d1ad0e7ad" in 5.723956857s openshift-storage 8m56s Normal Created pod/topolvm-node-2tnh5 Created container lvmd openshift-storage 8m56s Normal Started pod/topolvm-node-2tnh5 Started container lvmd openshift-storage 8m56s Normal Pulled pod/topolvm-node-2tnh5 Container image "registry.redhat.io/odf4/odf-topolvm-rhel8@sha256:362c41177d086fc7c8d4fa4ac3bbedb18b1902e950feead9219ea59d1ad0e7ad" already present on machine openshift-storage 8m56s Normal Created pod/topolvm-node-2tnh5 Created container topolvm-node openshift-storage 8m56s Normal Started pod/topolvm-node-2tnh5 Started container topolvm-node openshift-storage 8m56s Normal Pulling pod/topolvm-node-2tnh5 Pulling image 
"registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:3babcf219371017d92f8bc3301de6c63681fcfaa8c344ec7891c8e84f31420eb" openshift-storage 8m52s Normal Pulled pod/topolvm-node-2tnh5 Successfully pulled image "registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:3babcf219371017d92f8bc3301de6c63681fcfaa8c344ec7891c8e84f31420eb" in 3.640322104s openshift-storage 8m52s Normal Created pod/topolvm-node-2tnh5 Created container csi-registrar openshift-storage 8m52s Normal Started pod/topolvm-node-2tnh5 Started container csi-registrar openshift-storage 8m52s Normal Pulling pod/topolvm-node-2tnh5 Pulling image "registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e4b0f6c89a12d26babdc2feae7d13d3f281ac4d38c24614c13c230b4a29ec56e" openshift-storage 9m6s Normal SuccessfulCreate daemonset/topolvm-node Created pod: topolvm-node-2tnh5 openshift-storage 8m56s Normal LeaderElection configmap/topolvm topolvm-controller-8456864f89-vg42d_ee92266d-9378-461c-ba6e-4ca77adb0b1c became leader openshift-storage 8m56s Normal LeaderElection lease/topolvm topolvm-controller-8456864f89-vg42d_ee92266d-9378-461c-ba6e-4ca77adb0b1c became leader ++ kubectl get namespace -o 'jsonpath={.items..metadata.name}' + for ns in $(kubectl get namespace -o jsonpath='{.items..metadata.name}') ++ kubectl get pods -n default -o name + for ns in $(kubectl get namespace -o jsonpath='{.items..metadata.name}') ++ kubectl get pods -n kube-node-lease -o name + for ns in $(kubectl get namespace -o jsonpath='{.items..metadata.name}') ++ kubectl get pods -n kube-public -o name + for ns in $(kubectl get namespace -o jsonpath='{.items..metadata.name}') ++ kubectl get pods -n kube-system -o name + for ns in $(kubectl get namespace -o jsonpath='{.items..metadata.name}') ++ kubectl get pods -n openshift-dns -o name + for pod in $(kubectl get pods -n $ns -o name) + kubectl describe -n openshift-dns pod/dns-default-tw2xt Name: dns-default-tw2xt Namespace: openshift-dns Priority: 2000001000 
Priority Class Name: system-node-critical Service Account: dns Node: release-ci-ci-op-k5cwk1pv-7cb14/10.0.0.2 Start Time: Tue, 15 Nov 2022 05:39:33 +0000 Labels: controller-revision-hash=85975f57cd dns.operator.openshift.io/daemonset-dns=default pod-template-generation=1 Annotations: k8s.ovn.org/pod-networks: {"default":{"ip_addresses":["10.42.0.6/24"],"mac_address":"0a:58:0a:2a:00:06","gateway_ips":["10.42.0.1"],"ip_address":"10.42.0.6/24","gat... target.workload.openshift.io/management: {"effect": "PreferredDuringScheduling"} Status: Running IP: 10.42.0.6 IPs: IP: 10.42.0.6 Controlled By: DaemonSet/dns-default Containers: dns: Container ID: cri-o://31d008edcb11e830255afd1bef37f38cebd0797f00c80cda5b3228e3d34f0475 Image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:efa3a9aae6ad83d0eec44b654e75a11ed4887a4d25f4a7412a102456688a5840 Image ID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:efa3a9aae6ad83d0eec44b654e75a11ed4887a4d25f4a7412a102456688a5840 Ports: 5353/UDP, 5353/TCP Host Ports: 0/UDP, 0/TCP Command: coredns Args: -conf /etc/coredns/Corefile State: Running Started: Tue, 15 Nov 2022 05:39:45 +0000 Ready: True Restart Count: 0 Requests: cpu: 50m memory: 70Mi Liveness: http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5 Readiness: http-get http://:8181/ready delay=10s timeout=3s period=3s #success=1 #failure=3 Environment: Mounts: /etc/coredns from config-volume (ro) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2bt8l (ro) kube-rbac-proxy: Container ID: cri-o://aa910d7f1e329bfbe5da330c4849413c959d4bc1dec6a197ab65d88c18ae7ef0 Image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3b75e8e24ef2235370454dd557ae59f93bf47c3e17653109ab7d869633728347 Image ID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3b75e8e24ef2235370454dd557ae59f93bf47c3e17653109ab7d869633728347 Port: 9154/TCP Host Port: 0/TCP Args: --logtostderr --secure-listen-address=:9154 
--tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 --upstream=http://127.0.0.1:9153/ --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key State: Running Started: Tue, 15 Nov 2022 05:39:49 +0000 Ready: True Restart Count: 0 Requests: cpu: 10m memory: 40Mi Environment: Mounts: /etc/tls/private from metrics-tls (ro) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2bt8l (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: config-volume: Type: ConfigMap (a volume populated by a ConfigMap) Name: dns-default Optional: false metrics-tls: Type: Secret (a volume populated by a Secret) SecretName: dns-default-metrics-tls Optional: false kube-api-access-2bt8l: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: DownwardAPI: true ConfigMapName: openshift-service-ca.crt ConfigMapOptional: QoS Class: Burstable Node-Selectors: kubernetes.io/os=linux Tolerations: node-role.kubernetes.io/master op=Exists node.kubernetes.io/disk-pressure:NoSchedule op=Exists node.kubernetes.io/memory-pressure:NoSchedule op=Exists node.kubernetes.io/not-ready:NoExecute op=Exists node.kubernetes.io/pid-pressure:NoSchedule op=Exists node.kubernetes.io/unreachable:NoExecute op=Exists node.kubernetes.io/unschedulable:NoSchedule op=Exists Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 9m6s default-scheduler Successfully assigned openshift-dns/dns-default-tw2xt to release-ci-ci-op-k5cwk1pv-7cb14 Warning FailedMount 9m3s (x4 over 9m7s) kubelet MountVolume.SetUp failed for volume "metrics-tls" : secret "dns-default-metrics-tls" not found Normal Pulling 8m58s kubelet Pulling image 
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:efa3a9aae6ad83d0eec44b654e75a11ed4887a4d25f4a7412a102456688a5840" Normal Pulled 8m55s kubelet Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:efa3a9aae6ad83d0eec44b654e75a11ed4887a4d25f4a7412a102456688a5840" in 3.120926845s Normal Created 8m55s kubelet Created container dns Normal Started 8m54s kubelet Started container dns Normal Pulling 8m54s kubelet Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3b75e8e24ef2235370454dd557ae59f93bf47c3e17653109ab7d869633728347" Normal Pulled 8m51s kubelet Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3b75e8e24ef2235370454dd557ae59f93bf47c3e17653109ab7d869633728347" in 3.258941881s Normal Created 8m51s kubelet Created container kube-rbac-proxy Normal Started 8m51s kubelet Started container kube-rbac-proxy ++ kubectl get -n openshift-dns pod/dns-default-tw2xt -o 'jsonpath={.spec.containers[*].name}' + for container in $(kubectl get -n $ns $pod -o jsonpath='{.spec.containers[*].name}') + kubectl logs -n openshift-dns pod/dns-default-tw2xt dns .:5353 hostname.bind.:5353 [INFO] plugin/reload: Running configuration SHA512 = e100c1081a47648310f72de96fbdbe31f928f02784eda1155c53be749ad04c434e50da55f960a800606274fb080d8a1f79df7effa47afa9a02bddd9f96192e18 CoreDNS-1.10.0 linux/amd64, go1.19.2, + kubectl logs --previous=true -n openshift-dns pod/dns-default-tw2xt dns Error from server (BadRequest): previous terminated container "dns" in pod "dns-default-tw2xt" not found + true + for container in $(kubectl get -n $ns $pod -o jsonpath='{.spec.containers[*].name}') + kubectl logs -n openshift-dns pod/dns-default-tw2xt kube-rbac-proxy I1115 05:39:49.426898 1 main.go:187] Valid token audiences: I1115 05:39:49.427016 1 main.go:337] Reading certificate files I1115 05:39:49.427164 1 main.go:371] Starting TCP socket on :9154 I1115 05:39:49.427552 1 main.go:378] Listening securely on :9154 + kubectl logs 
--previous=true -n openshift-dns pod/dns-default-tw2xt kube-rbac-proxy Error from server (BadRequest): previous terminated container "kube-rbac-proxy" in pod "dns-default-tw2xt" not found + true + for pod in $(kubectl get pods -n $ns -o name) + kubectl describe -n openshift-dns pod/node-resolver-jhcw4 Name: node-resolver-jhcw4 Namespace: openshift-dns Priority: 2000001000 Priority Class Name: system-node-critical Service Account: node-resolver Node: release-ci-ci-op-k5cwk1pv-7cb14/10.0.0.2 Start Time: Tue, 15 Nov 2022 05:39:11 +0000 Labels: controller-revision-hash=6db77686b5 dns.operator.openshift.io/daemonset-node-resolver= pod-template-generation=1 Annotations: target.workload.openshift.io/management: {"effect": "PreferredDuringScheduling"} Status: Running IP: 10.0.0.2 IPs: IP: 10.0.0.2 Controlled By: DaemonSet/node-resolver Containers: dns-node-resolver: Container ID: cri-o://85cdd2d22091838b73c221cb976a422c4facea28fc12aae044cffd2e3f2748fe Image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24bfe9543bd34c6c3124c39c319ef0ec20534aec974126617752b2883d6d6cff Image ID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24bfe9543bd34c6c3124c39c319ef0ec20534aec974126617752b2883d6d6cff Port: Host Port: Command: /bin/bash -c #!/bin/bash set -uo pipefail trap 'jobs -p | xargs kill || true; wait; exit 0' TERM NAMESERVER=${DNS_DEFAULT_SERVICE_HOST} OPENSHIFT_MARKER="openshift-generated-node-resolver" HOSTS_FILE="/etc/hosts" TEMP_FILE="/etc/hosts.tmp" IFS=', ' read -r -a services <<< "${SERVICES}" # Make a temporary file with the old hosts file's attributes. cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}" while true; do declare -A svc_ips for svc in "${services[@]}"; do # Fetch service IP from cluster dns if present. We make several tries # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones # are for deployments with Kuryr on older OpenStack (OSP13) - those do not # support UDP loadbalancers and require reaching DNS through TCP. 
cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') for i in ${!cmds[*]} do ips=($(eval "${cmds[i]}")) if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then svc_ips["${svc}"]="${ips[@]}" break fi done done # Update /etc/hosts only if we get valid service IPs # We will not update /etc/hosts when there is coredns service outage or api unavailability # Stale entries could exist in /etc/hosts if the service is deleted if [[ -n "${svc_ips[*]-}" ]]; then # Build a new hosts file from /etc/hosts with our custom entries filtered out grep -v "# ${OPENSHIFT_MARKER}" "${HOSTS_FILE}" > "${TEMP_FILE}" # Append resolver entries for services for svc in "${!svc_ips[@]}"; do for ip in ${svc_ips[${svc}]}; do echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" done done # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior # Replace /etc/hosts with our modified version if needed cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn fi sleep 60 & wait unset svc_ips done State: Running Started: Tue, 15 Nov 2022 05:39:23 +0000 Ready: True Restart Count: 0 Requests: cpu: 5m memory: 21Mi Environment: SERVICES: image-registry.openshift-image-registry.svc NAMESERVER: 172.30.0.10 CLUSTER_DOMAIN: cluster.local Mounts: /etc/hosts from hosts-file (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-v2wfq (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: hosts-file: Type: HostPath (bare host directory volume) Path: /etc/hosts HostPathType: File kube-api-access-v2wfq: Type: Projected (a volume 
that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: DownwardAPI: true ConfigMapName: openshift-service-ca.crt ConfigMapOptional: QoS Class: Burstable Node-Selectors: kubernetes.io/os=linux Tolerations: op=Exists Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 9m29s default-scheduler Successfully assigned openshift-dns/node-resolver-jhcw4 to release-ci-ci-op-k5cwk1pv-7cb14 Normal Pulling 9m22s kubelet Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24bfe9543bd34c6c3124c39c319ef0ec20534aec974126617752b2883d6d6cff" Normal Pulled 9m17s kubelet Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24bfe9543bd34c6c3124c39c319ef0ec20534aec974126617752b2883d6d6cff" in 4.102921117s Normal Created 9m17s kubelet Created container dns-node-resolver Normal Started 9m17s kubelet Started container dns-node-resolver ++ kubectl get -n openshift-dns pod/node-resolver-jhcw4 -o 'jsonpath={.spec.containers[*].name}' + for container in $(kubectl get -n $ns $pod -o jsonpath='{.spec.containers[*].name}') + kubectl logs -n openshift-dns pod/node-resolver-jhcw4 dns-node-resolver + kubectl logs --previous=true -n openshift-dns pod/node-resolver-jhcw4 dns-node-resolver Error from server (BadRequest): previous terminated container "dns-node-resolver" in pod "node-resolver-jhcw4" not found + true + for ns in $(kubectl get namespace -o jsonpath='{.items..metadata.name}') ++ kubectl get pods -n openshift-infra -o name + for ns in $(kubectl get namespace -o jsonpath='{.items..metadata.name}') ++ kubectl get pods -n openshift-ingress -o name + for pod in $(kubectl get pods -n $ns -o name) + kubectl describe -n openshift-ingress pod/router-default-76b7657c68-6xcfc Name: router-default-76b7657c68-6xcfc Namespace: openshift-ingress Priority: 2000000000 Priority Class Name: system-cluster-critical Service Account: router Node: 
release-ci-ci-op-k5cwk1pv-7cb14/10.0.0.2 Start Time: Tue, 15 Nov 2022 05:39:33 +0000 Labels: ingresscontroller.operator.openshift.io/deployment-ingresscontroller=default pod-template-hash=76b7657c68 Annotations: k8s.ovn.org/pod-networks: {"default":{"ip_addresses":["10.42.0.4/24"],"mac_address":"0a:58:0a:2a:00:04","gateway_ips":["10.42.0.1"],"ip_address":"10.42.0.4/24","gat... openshift.io/scc: hostnetwork target.workload.openshift.io/management: {"effect": "PreferredDuringScheduling"} unsupported.do-not-use.openshift.io/override-liveness-grace-period-seconds: 10 Status: Running IP: 10.42.0.4 IPs: IP: 10.42.0.4 Controlled By: ReplicaSet/router-default-76b7657c68 Containers: router: Container ID: cri-o://ed4a629a259f8e085d11c43c63fcd10eb44f947f8f8a3fbb04f4b0d2c26330b4 Image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2e14189c0096c2294368a1a8edd7dec5f30c93f8bbd614da0e78127c8b194ab7 Image ID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2e14189c0096c2294368a1a8edd7dec5f30c93f8bbd614da0e78127c8b194ab7 Ports: 80/TCP, 443/TCP, 1936/TCP Host Ports: 80/TCP, 443/TCP, 0/TCP State: Running Started: Tue, 15 Nov 2022 05:39:45 +0000 Ready: True Restart Count: 0 Requests: cpu: 100m memory: 256Mi Liveness: http-get http://:1936/healthz delay=0s timeout=1s period=10s #success=1 #failure=3 Readiness: http-get http://:1936/healthz/ready delay=0s timeout=1s period=10s #success=1 #failure=3 Startup: http-get http://:1936/healthz/ready delay=0s timeout=1s period=1s #success=1 #failure=120 Environment: ROUTER_SERVICE_NAMESPACE: openshift-ingress DEFAULT_CERTIFICATE_DIR: /etc/pki/tls/private DEFAULT_DESTINATION_CA_PATH: /var/run/configmaps/service-ca/service-ca.crt STATS_PORT: 1936 RELOAD_INTERVAL: 5s ROUTER_ALLOW_WILDCARD_ROUTES: false ROUTER_CANONICAL_HOSTNAME: router-default.apps.cluster.local ROUTER_CIPHERS: 
ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384 ROUTER_CIPHERSUITES: TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256 ROUTER_DISABLE_HTTP2: true ROUTER_DISABLE_NAMESPACE_OWNERSHIP_CHECK: false ROUTER_LOAD_BALANCE_ALGORITHM: random ROUTER_METRICS_TYPE: haproxy ROUTER_SERVICE_NAME: default ROUTER_SET_FORWARDED_HEADERS: append ROUTER_TCP_BALANCE_SCHEME: source ROUTER_THREADS: 4 ROUTER_USE_PROXY_PROTOCOL: true SSL_MIN_VERSION: TLSv1.2 Mounts: /etc/pki/tls/private from default-certificate (ro) /var/run/configmaps/service-ca from service-ca-bundle (ro) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5fk5k (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: default-certificate: Type: Secret (a volume populated by a Secret) SecretName: router-certs-default Optional: false service-ca-bundle: Type: ConfigMap (a volume populated by a ConfigMap) Name: service-ca-bundle Optional: false kube-api-access-5fk5k: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: DownwardAPI: true ConfigMapName: openshift-service-ca.crt ConfigMapOptional: QoS Class: Burstable Node-Selectors: kubernetes.io/os=linux node-role.kubernetes.io/worker= Tolerations: node.kubernetes.io/memory-pressure:NoSchedule op=Exists node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling 9m30s default-scheduler 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. 
Normal Scheduled 9m7s default-scheduler Successfully assigned openshift-ingress/router-default-76b7657c68-6xcfc to release-ci-ci-op-k5cwk1pv-7cb14 Warning FailedMount 9m4s (x4 over 9m8s) kubelet MountVolume.SetUp failed for volume "service-ca-bundle" : configmap references non-existent config key: service-ca.crt Normal Pulling 8m59s kubelet Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2e14189c0096c2294368a1a8edd7dec5f30c93f8bbd614da0e78127c8b194ab7" Normal Pulled 8m56s kubelet Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2e14189c0096c2294368a1a8edd7dec5f30c93f8bbd614da0e78127c8b194ab7" in 3.118996826s Normal Created 8m56s kubelet Created container router Normal Started 8m56s kubelet Started container router ++ kubectl get -n openshift-ingress pod/router-default-76b7657c68-6xcfc -o 'jsonpath={.spec.containers[*].name}' + for container in $(kubectl get -n $ns $pod -o jsonpath='{.spec.containers[*].name}') + kubectl logs -n openshift-ingress pod/router-default-76b7657c68-6xcfc router I1115 05:39:46.019608 1 template.go:437] router "msg"="starting router" "version"="majorFromGit: \nminorFromGit: \ncommitFromGit: 3065f6583f3925328fbdbfe95e3bc7bb7a084d33\nversionFromGit: 4.0.0-404-g3065f658\ngitTreeState: clean\nbuildDate: 2022-11-08T13:08:04Z\n" I1115 05:39:46.021134 1 metrics.go:169] metrics "msg"="router health and metrics port listening" "address"="0.0.0.0:1936" I1115 05:39:46.027361 1 router.go:191] template "msg"="creating a new template router" "writeDir"="/var/lib/haproxy" I1115 05:39:46.027408 1 router.go:273] template "msg"="router will coalesce reloads within an interval of each other" "interval"="5s" I1115 05:39:46.027791 1 router.go:343] template "msg"="watching for changes" "path"="/etc/pki/tls/private" I1115 05:39:46.027840 1 router.go:269] router "msg"="router is including routes in all namespaces" E1115 05:39:46.130949 1 haproxy.go:418] can't scrape HAProxy: dial unix 
/var/lib/haproxy/run/haproxy.sock: connect: no such file or directory I1115 05:39:46.166262 1 router.go:618] template "msg"="router reloaded" "output"=" - Checking http://localhost:80 using PROXY protocol ...\n - Health check ok : 0 retry attempt(s).\n" I1115 05:39:51.162152 1 router.go:618] template "msg"="router reloaded" "output"=" - Checking http://localhost:80 using PROXY protocol ...\n - Health check ok : 0 retry attempt(s).\n" + kubectl logs --previous=true -n openshift-ingress pod/router-default-76b7657c68-6xcfc router Error from server (BadRequest): previous terminated container "router" in pod "router-default-76b7657c68-6xcfc" not found + true + for ns in $(kubectl get namespace -o jsonpath='{.items..metadata.name}') ++ kubectl get pods -n openshift-kube-controller-manager -o name + for ns in $(kubectl get namespace -o jsonpath='{.items..metadata.name}') ++ kubectl get pods -n openshift-ovn-kubernetes -o name + for pod in $(kubectl get pods -n $ns -o name) + kubectl describe -n openshift-ovn-kubernetes pod/ovnkube-master-kdsb7 Name: ovnkube-master-kdsb7 Namespace: openshift-ovn-kubernetes Priority: 2000000000 Priority Class Name: system-cluster-critical Service Account: ovn-kubernetes-controller Node: release-ci-ci-op-k5cwk1pv-7cb14/10.0.0.2 Start Time: Tue, 15 Nov 2022 05:39:11 +0000 Labels: app=ovnkube-master component=network controller-revision-hash=54875d4d5c kubernetes.io/os=linux openshift.io/component=network ovn-db-pod=true pod-template-generation=1 type=infra Annotations: target.workload.openshift.io/management: {"effect": "PreferredDuringScheduling"} Status: Running IP: 10.0.0.2 IPs: IP: 10.0.0.2 Controlled By: DaemonSet/ovnkube-master Containers: northd: Container ID: cri-o://ead944ddf34591930a2dd893c8d869d22710452d650ed63cb987f8c8067c53bc Image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd9365d7ab0a70fd0d67937853bed13eaece6f49895aed34d7eca038f5e0aa77 Image ID: 
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd9365d7ab0a70fd0d67937853bed13eaece6f49895aed34d7eca038f5e0aa77 Port: Host Port: Command: /bin/bash -c
set -xem
if [[ -f /env/_master ]]; then
  set -o allexport
  source /env/_master
  set +o allexport
fi
quit() {
  echo "$(date -Iseconds) - stopping ovn-northd"
  OVN_MANAGE_OVSDB=no /usr/share/ovn/scripts/ovn-ctl stop_northd
  echo "$(date -Iseconds) - ovn-northd stopped"
  rm -f /var/run/ovn/ovn-northd.pid
  exit 0
} # end of quit
trap quit TERM INT
echo "$(date -Iseconds) - starting ovn-northd"
exec ovn-northd \
  --no-chdir "-vconsole:${OVN_LOG_LEVEL}" -vfile:off "-vPATTERN:console:%D{%Y-%m-%dT%H:%M:%S.###Z}|%05N|%c%T|%p|%m" \
  --pidfile /var/run/ovn/ovn-northd.pid &
wait $!
State: Running Started: Tue, 15 Nov 2022 05:39:25 +0000 Ready: True Restart Count: 0 Requests: cpu: 10m memory: 10Mi Environment: OVN_LOG_LEVEL: info Mounts: /env from env-overrides (rw) /run/openvswitch/ from run-openvswitch (rw) /run/ovn/ from run-ovn (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t7qn5 (ro) nbdb: Container ID: cri-o://8ffa8d8bb0bce065da6545f64ac28109356a0580ca0e99d30eea5fd8938d6a57 Image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd9365d7ab0a70fd0d67937853bed13eaece6f49895aed34d7eca038f5e0aa77 Image ID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd9365d7ab0a70fd0d67937853bed13eaece6f49895aed34d7eca038f5e0aa77 Port: Host Port: Command: /bin/bash -c
set -xem
if [[ -f /env/_master ]]; then
  set -o allexport
  source /env/_master
  set +o allexport
fi
quit() {
  echo "$(date -Iseconds) - stopping nbdb"
  /usr/share/ovn/scripts/ovn-ctl stop_nb_ovsdb
  echo "$(date -Iseconds) - nbdb stopped"
  rm -f /var/run/ovn/ovnnb_db.pid
  exit 0
} # end of quit
trap quit TERM INT
bracketify() { case "$1" in *:*) echo "[$1]" ;; *) echo "$1" ;; esac }
compact() {
  sleep 15
  while true; do
    /usr/bin/ovn-appctl -t /var/run/ovn/ovn${1}_db.ctl --timeout=5 ovsdb-server/compact 2>/dev/null || true
    sleep 600
  done
}
# initialize variables
db="nb"
ovn_db_file="/etc/ovn/ovn${db}_db.db"
OVN_ARGS="--db-nb-cluster-local-port=9643 --no-monitor"
echo "$(date -Iseconds) - starting nbdb"
exec /usr/share/ovn/scripts/ovn-ctl \
  ${OVN_ARGS} \
  --ovn-nb-log="-vconsole:${OVN_LOG_LEVEL} -vfile:off -vPATTERN:console:%D{%Y-%m-%dT%H:%M:%S.###Z}|%05N|%c%T|%p|%m" \
  run_nb_ovsdb &
db_pid=$!
compact $db &
wait $db_pid
State: Running Started: Tue, 15 Nov 2022 05:39:25 +0000 Ready: True Restart Count: 0 Requests: cpu: 10m memory: 10Mi Readiness: exec [/bin/bash -c set -xeo pipefail /usr/bin/ovn-appctl -t /var/run/ovn/ovnnb_db.ctl --timeout=5 ovsdb-server/memory-trim-on-compaction on 2>/dev/null ] delay=0s timeout=5s period=10s #success=1 #failure=3 Environment: OVN_LOG_LEVEL: info OVN_NORTHD_PROBE_INTERVAL: 5000 Mounts: /env from env-overrides (rw) /run/openvswitch/ from run-openvswitch (rw) /run/ovn/ from run-ovn (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t7qn5 (ro) sbdb: Container ID: cri-o://9abd8f3ebb584beb37c8157ff3113b3f15b20b9a876800c646ec0963e0c3f19c Image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd9365d7ab0a70fd0d67937853bed13eaece6f49895aed34d7eca038f5e0aa77 Image ID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd9365d7ab0a70fd0d67937853bed13eaece6f49895aed34d7eca038f5e0aa77 Port: Host Port: Command: /bin/bash -c
set -xem
if [[ -f /env/_master ]]; then
  set -o allexport
  source /env/_master
  set +o allexport
fi
quit() {
  echo "$(date -Iseconds) - stopping sbdb"
  /usr/share/ovn/scripts/ovn-ctl stop_sb_ovsdb
  echo "$(date -Iseconds) - sbdb stopped"
  rm -f /var/run/ovn/ovnsb_db.pid
  exit 0
} # end of quit
trap quit TERM INT
bracketify() { case "$1" in *:*) echo "[$1]" ;; *) echo "$1" ;; esac }
compact() {
  sleep 15
  while true; do
    /usr/bin/ovn-appctl -t /var/run/ovn/ovn${1}_db.ctl --timeout=5 ovsdb-server/compact 2>/dev/null || true
    sleep 600
  done
}
# initialize variables
db="sb"
ovn_db_file="/etc/ovn/ovn${db}_db.db"
OVN_ARGS="--db-sb-cluster-local-port=9644 --no-monitor"
echo "$(date -Iseconds) - starting sbdb "
exec /usr/share/ovn/scripts/ovn-ctl \
  ${OVN_ARGS} \
  --ovn-sb-log="-vconsole:${OVN_LOG_LEVEL} -vfile:off -vPATTERN:console:%D{%Y-%m-%dT%H:%M:%S.###Z}|%05N|%c%T|%p|%m" \
  run_sb_ovsdb &
db_pid=$!
compact $db &
wait $db_pid
State: Running Started: Tue, 15 Nov 2022 05:39:26 +0000 Ready: True Restart Count: 0 Requests: cpu: 10m memory: 10Mi Readiness: exec [/bin/bash -c set -xeo pipefail /usr/bin/ovn-appctl -t /var/run/ovn/ovnsb_db.ctl --timeout=5 ovsdb-server/memory-trim-on-compaction on 2>/dev/null ] delay=0s timeout=5s period=10s #success=1 #failure=3 Environment: OVN_LOG_LEVEL: info Mounts: /env from env-overrides (rw) /run/openvswitch/ from run-openvswitch (rw) /run/ovn/ from run-ovn (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t7qn5 (ro) ovnkube-master: Container ID: cri-o://46e65cefb89093d7c2805fbae3f3761b12c9aaa5a9b49b69fe100e83e94a17e7 Image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd9365d7ab0a70fd0d67937853bed13eaece6f49895aed34d7eca038f5e0aa77 Image ID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd9365d7ab0a70fd0d67937853bed13eaece6f49895aed34d7eca038f5e0aa77 Port: Host Port: Command: /bin/bash -c
set -xe
if [[ -f "/env/_master" ]]; then
  set -o allexport
  source "/env/_master"
  set +o allexport
fi
# K8S_NODE_IP triggers reconciliation of this daemon when node IP changes
echo "$(date -Iseconds) - starting ovnkube-master, Node: ${K8S_NODE} IP: ${K8S_NODE_IP}"
echo "I$(date "+%m%d %H:%M:%S.%N") - copy ovn-k8s-cni-overlay"
cp -f /usr/libexec/cni/ovn-k8s-cni-overlay /cni-bin-dir/
echo "I$(date "+%m%d %H:%M:%S.%N") - disable conntrack on geneve port"
iptables -t raw -A PREROUTING -p udp --dport 6081 -j NOTRACK
iptables -t raw -A OUTPUT -p udp --dport 6081 -j NOTRACK
ip6tables -t raw -A PREROUTING -p udp --dport 6081 -j NOTRACK
ip6tables -t raw -A OUTPUT -p udp --dport 6081 -j NOTRACK
echo "I$(date "+%m%d %H:%M:%S.%N") - starting ovnkube-node"
gateway_mode_flags="--gateway-mode local --gateway-interface br-ex"
sysctl net.ipv4.ip_forward=1
gw_interface_flag=
# if br-ex1 is configured on the node, we want to use it for external gateway traffic
if [ -d /sys/class/net/br-ex1 ]; then
  gw_interface_flag="--exgw-interface=br-ex1"
  # the functionality depends on ip_forwarding being enabled
fi
echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-master - start ovnkube --init-master ${K8S_NODE} --init-node ${K8S_NODE}"
exec /usr/bin/ovnkube \
  --init-master "${K8S_NODE}" \
  --init-node "${K8S_NODE}" \
  --config-file=/run/ovnkube-config/ovnkube.conf \
  --loglevel "${OVN_KUBE_LOG_LEVEL}" \
  ${gateway_mode_flags} \
  ${gw_interface_flag} \
  --inactivity-probe="180000" \
  --nb-address "" \
  --sb-address "" \
  --enable-multicast \
  --disable-snat-multiple-gws \
  --acl-logging-rate-limit "20"
State: Running Started: Tue, 15 Nov 2022 05:39:26 +0000 Ready: True Restart Count: 0 Requests: cpu: 10m memory: 60Mi Readiness: exec [test -f /etc/cni/net.d/10-ovn-kubernetes.conf] delay=5s timeout=1s period=5s #success=1 #failure=3 Environment: OVN_KUBE_LOG_LEVEL: 4 K8S_NODE: (v1:spec.nodeName) K8S_NODE_IP: (v1:status.hostIP) Mounts: /cni-bin-dir from host-cni-bin (rw) /dev/log from log-socket (rw) /env from env-overrides (rw) /etc/cni/net.d from host-cni-netd (rw) /etc/openvswitch from etc-openvswitch-node (rw) /etc/ovn/ from etc-openvswitch-node (rw) /etc/systemd/system from systemd-units (ro) /host from host-slash (ro) /run/netns from host-run-netns (ro) /run/openvswitch/ from run-openvswitch (rw) /run/ovn-kubernetes/ from host-run-ovn-kubernetes (rw) /run/ovn/ from run-ovn (rw) /run/ovnkube-config/ from ovnkube-config (rw) /var/lib/microshift/resources/kubeadmin from kubeconfig (rw) /var/log/ovn from node-log (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t7qn5 (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: systemd-units: Type: HostPath (bare host directory volume)
Path: /etc/systemd/system HostPathType: run-openvswitch: Type: HostPath (bare host directory volume) Path: /var/run/openvswitch HostPathType: run-ovn: Type: HostPath (bare host directory volume) Path: /var/run/ovn HostPathType: host-slash: Type: HostPath (bare host directory volume) Path: / HostPathType: host-run-netns: Type: HostPath (bare host directory volume) Path: /run/netns HostPathType: etc-openvswitch-node: Type: HostPath (bare host directory volume) Path: /etc/openvswitch HostPathType: node-log: Type: HostPath (bare host directory volume) Path: /var/log/ovn HostPathType: log-socket: Type: HostPath (bare host directory volume) Path: /dev/log HostPathType: host-run-ovn-kubernetes: Type: HostPath (bare host directory volume) Path: /run/ovn-kubernetes HostPathType: host-cni-netd: Type: HostPath (bare host directory volume) Path: /etc/cni/net.d HostPathType: host-cni-bin: Type: HostPath (bare host directory volume) Path: /opt/cni/bin HostPathType: kubeconfig: Type: HostPath (bare host directory volume) Path: /var/lib/microshift/resources/kubeadmin HostPathType: ovnkube-config: Type: ConfigMap (a volume populated by a ConfigMap) Name: ovnkube-config Optional: false env-overrides: Type: ConfigMap (a volume populated by a ConfigMap) Name: env-overrides Optional: true kube-api-access-t7qn5: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: DownwardAPI: true ConfigMapName: openshift-service-ca.crt ConfigMapOptional: QoS Class: Burstable Node-Selectors: kubernetes.io/os=linux Tolerations: op=Exists Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 9m30s default-scheduler Successfully assigned openshift-ovn-kubernetes/ovnkube-master-kdsb7 to release-ci-ci-op-k5cwk1pv-7cb14 Normal Pulling 9m23s kubelet Pulling image 
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd9365d7ab0a70fd0d67937853bed13eaece6f49895aed34d7eca038f5e0aa77" Normal Pulled 9m16s kubelet Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd9365d7ab0a70fd0d67937853bed13eaece6f49895aed34d7eca038f5e0aa77" in 6.615139723s Normal Created 9m16s kubelet Created container northd Normal Started 9m16s kubelet Started container northd Normal Pulled 9m16s kubelet Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd9365d7ab0a70fd0d67937853bed13eaece6f49895aed34d7eca038f5e0aa77" already present on machine Normal Created 9m16s kubelet Created container nbdb Normal Started 9m16s kubelet Started container nbdb Normal Pulled 9m15s kubelet Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd9365d7ab0a70fd0d67937853bed13eaece6f49895aed34d7eca038f5e0aa77" already present on machine Normal Created 9m15s kubelet Created container sbdb Normal Started 9m15s kubelet Started container sbdb Normal Pulled 9m15s kubelet Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd9365d7ab0a70fd0d67937853bed13eaece6f49895aed34d7eca038f5e0aa77" already present on machine Normal Created 9m15s kubelet Created container ovnkube-master Normal Started 9m15s kubelet Started container ovnkube-master ++ kubectl get -n openshift-ovn-kubernetes pod/ovnkube-master-kdsb7 -o 'jsonpath={.spec.containers[*].name}' + for container in $(kubectl get -n $ns $pod -o jsonpath='{.spec.containers[*].name}') + kubectl logs -n openshift-ovn-kubernetes pod/ovnkube-master-kdsb7 northd + [[ -f /env/_master ]] + trap quit TERM INT ++ date -Iseconds + echo '2022-11-15T05:39:25+00:00 - starting ovn-northd' 2022-11-15T05:39:25+00:00 - starting ovn-northd + wait 62836 + exec ovn-northd --no-chdir -vconsole:info -vfile:off '-vPATTERN:console:%D{%Y-%m-%dT%H:%M:%S.###Z}|%05N|%c%T|%p|%m' --pidfile /var/run/ovn/ovn-northd.pid 2022-11-15T05:39:25.768Z|00001|ovn_northd|INFO|OVN internal 
version is : [22.06.1-20.23.0-63.4] 2022-11-15T05:39:25.769Z|00002|reconnect|INFO|unix:/var/run/ovn/ovnnb_db.sock: connecting... 2022-11-15T05:39:25.769Z|00003|reconnect|INFO|unix:/var/run/ovn/ovnnb_db.sock: connection attempt failed (No such file or directory) 2022-11-15T05:39:25.769Z|00004|ovn_northd|INFO|OVN NB IDL reconnected, force recompute. 2022-11-15T05:39:25.769Z|00005|reconnect|INFO|unix:/var/run/ovn/ovnsb_db.sock: connecting... 2022-11-15T05:39:25.769Z|00006|reconnect|INFO|unix:/var/run/ovn/ovnsb_db.sock: connection attempt failed (No such file or directory) 2022-11-15T05:39:25.769Z|00007|ovn_northd|INFO|OVN SB IDL reconnected, force recompute. 2022-11-15T05:39:26.770Z|00008|reconnect|INFO|unix:/var/run/ovn/ovnnb_db.sock: connecting... 2022-11-15T05:39:26.770Z|00009|reconnect|INFO|unix:/var/run/ovn/ovnsb_db.sock: connecting... 2022-11-15T05:39:26.770Z|00010|reconnect|INFO|unix:/var/run/ovn/ovnnb_db.sock: connected 2022-11-15T05:39:26.770Z|00011|reconnect|INFO|unix:/var/run/ovn/ovnsb_db.sock: connected 2022-11-15T05:39:26.770Z|00012|ovn_northd|INFO|ovn-northd lock acquired. This ovn-northd instance is now active. 
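The reconnect lines above show a plain retry-until-ready loop: ovn-northd keeps attempting the ovnnb_db.sock/ovnsb_db.sock connections, logging each failure, until the database servers come up. A minimal standalone sketch of that polling pattern, under the assumption that a file standing in for the socket appears later (the path /tmp/demo.sock and the 1s timings are illustrative stand-ins, not the real sockets):

```shell
#!/bin/sh
# Retry-until-ready sketch: poll for a resource that another process
# creates later, as ovn-northd does for the ovsdb unix sockets.
sock=/tmp/demo.sock
rm -f "$sock"
( sleep 1; : > "$sock" ) &    # a second process "creates the socket" later

attempts=0
until [ -e "$sock" ]; do
    attempts=$((attempts + 1))
    echo "connection attempt failed (No such file or directory)"
    sleep 1
done
echo "connected after $attempts failed attempt(s)"
```

The real daemon adds backoff and reconnect bookkeeping, but the shape (loop, log the failure, sleep, retry) is the same.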
2022-11-15T05:39:39.095Z|00013|memory|INFO|9584 kB peak resident set size after 13.3 seconds + kubectl logs --previous=true -n openshift-ovn-kubernetes pod/ovnkube-master-kdsb7 northd Error from server (BadRequest): previous terminated container "northd" in pod "ovnkube-master-kdsb7" not found + true + for container in $(kubectl get -n $ns $pod -o jsonpath='{.spec.containers[*].name}') + kubectl logs -n openshift-ovn-kubernetes pod/ovnkube-master-kdsb7 nbdb + [[ -f /env/_master ]] + trap quit TERM INT + db=nb + ovn_db_file=/etc/ovn/ovnnb_db.db + OVN_ARGS='--db-nb-cluster-local-port=9643 --no-monitor' ++ date -Iseconds 2022-11-15T05:39:25+00:00 - starting nbdb + echo '2022-11-15T05:39:25+00:00 - starting nbdb' + db_pid=62880 + wait 62880 + exec /usr/share/ovn/scripts/ovn-ctl --db-nb-cluster-local-port=9643 --no-monitor '--ovn-nb-log=-vconsole:info -vfile:off -vPATTERN:console:%D{%Y-%m-%dT%H:%M:%S.###Z}|%05N|%c%T|%p|%m' run_nb_ovsdb + compact nb + sleep 15 /etc/ovn/ovnnb_db.db does not exist ... (warning). Creating empty database /etc/ovn/ovnnb_db.db. 2022-11-15T05:39:25.984Z|00001|vlog|INFO|opened log file /var/log/ovn/ovsdb-server-nb.log 2022-11-15T05:39:25.987Z|00002|ovsdb_server|INFO|ovsdb-server (Open vSwitch) 2.17.3 2022-11-15T05:39:26.628Z|00003|jsonrpc|WARN|unix#3: send error: Broken pipe 2022-11-15T05:39:26.628Z|00004|reconnect|WARN|unix#3: connection dropped (Broken pipe) 2022-11-15T05:39:27.936Z|00005|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:39:28.520Z|00006|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:39:35.991Z|00007|memory|INFO|34680 kB peak resident set size after 10.0 seconds 2022-11-15T05:39:35.991Z|00008|memory|INFO|atoms:676 cells:762 monitors:4 sessions:2 2022-11-15T05:39:38.536Z|00009|ovsdb_server|INFO|memory trimming after compaction enabled. 
+ true + /usr/bin/ovn-appctl -t /var/run/ovn/ovnnb_db.ctl --timeout=5 ovsdb-server/compact 2022-11-15T05:39:40.914Z|00010|ovsdb_server|INFO|compacting OVN_Northbound database by user request + sleep 600 2022-11-15T05:39:48.576Z|00011|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:39:58.525Z|00012|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:40:08.532Z|00013|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:40:18.533Z|00014|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:40:28.523Z|00015|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:40:38.526Z|00016|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:40:48.525Z|00017|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:40:58.521Z|00018|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:41:08.521Z|00019|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:41:18.516Z|00020|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:41:28.526Z|00021|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:41:38.530Z|00022|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:41:48.525Z|00023|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:41:58.526Z|00024|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:42:08.532Z|00025|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:42:18.535Z|00026|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:42:28.525Z|00027|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:42:38.527Z|00028|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:42:48.520Z|00029|ovsdb_server|INFO|memory trimming after compaction enabled. 
2022-11-15T05:42:58.527Z|00030|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:43:08.525Z|00031|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:43:18.524Z|00032|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:43:28.519Z|00033|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:43:38.523Z|00034|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:43:48.528Z|00035|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:43:58.524Z|00036|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:44:08.527Z|00037|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:44:18.529Z|00038|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:44:28.527Z|00039|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:44:38.521Z|00040|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:44:48.525Z|00041|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:44:58.531Z|00042|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:45:08.529Z|00043|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:45:18.525Z|00044|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:45:28.525Z|00045|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:45:38.519Z|00046|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:45:48.527Z|00047|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:45:58.524Z|00048|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:46:08.531Z|00049|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:46:18.523Z|00050|ovsdb_server|INFO|memory trimming after compaction enabled. 
2022-11-15T05:46:28.526Z|00051|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:46:38.526Z|00052|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:46:48.525Z|00053|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:46:58.520Z|00054|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:47:08.524Z|00055|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:47:18.528Z|00056|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:47:28.521Z|00057|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:47:38.526Z|00058|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:47:48.529Z|00059|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:47:58.527Z|00060|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:48:08.523Z|00061|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:48:18.525Z|00062|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:48:28.525Z|00063|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:48:38.522Z|00064|ovsdb_server|INFO|memory trimming after compaction enabled. 
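The `+ compact nb` / `+ sleep 600` trace above is the wrapper's background maintenance loop: sleep, fire a best-effort compaction, swallow any failure with `|| true` so one bad invocation never kills the loop, and repeat. A bounded standalone sketch of the same shape (the three-pass counter, `sleep 0`, and the `false` stand-in command are illustrative; the real loop runs `ovn-appctl ... ovsdb-server/compact` forever on a 600s interval):

```shell
#!/bin/sh
# Bounded sketch of the compact() maintenance loop: each pass runs a
# best-effort command whose failure is swallowed so the loop survives.
passes=0
i=0
while [ "$i" -lt 3 ]; do      # real loop: while true; do
    false 2>/dev/null || true # stand-in for: ovn-appctl --timeout=5 ovsdb-server/compact
    passes=$((passes + 1))
    sleep 0                   # stand-in for: sleep 600
    i=$((i + 1))
done
echo "completed $passes compaction passes"
```

Running the loop as a background job (`compact $db &`) while `wait`-ing on the server pid is what lets the wrapper keep periodic maintenance alive without a separate sidecar.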
+ kubectl logs --previous=true -n openshift-ovn-kubernetes pod/ovnkube-master-kdsb7 nbdb Error from server (BadRequest): previous terminated container "nbdb" in pod "ovnkube-master-kdsb7" not found + true + for container in $(kubectl get -n $ns $pod -o jsonpath='{.spec.containers[*].name}') + kubectl logs -n openshift-ovn-kubernetes pod/ovnkube-master-kdsb7 sbdb + [[ -f /env/_master ]] + trap quit TERM INT + db=sb + ovn_db_file=/etc/ovn/ovnsb_db.db + OVN_ARGS='--db-sb-cluster-local-port=9644 --no-monitor' ++ date -Iseconds + echo '2022-11-15T05:39:26+00:00 - starting sbdb ' 2022-11-15T05:39:26+00:00 - starting sbdb + db_pid=62977 + wait 62977 + exec /usr/share/ovn/scripts/ovn-ctl --db-sb-cluster-local-port=9644 --no-monitor '--ovn-sb-log=-vconsole:info -vfile:off -vPATTERN:console:%D{%Y-%m-%dT%H:%M:%S.###Z}|%05N|%c%T|%p|%m' run_sb_ovsdb + compact sb + sleep 15 /etc/ovn/ovnsb_db.db does not exist ... (warning). Creating empty database /etc/ovn/ovnsb_db.db. 2022-11-15T05:39:26.245Z|00001|vlog|INFO|opened log file /var/log/ovn/ovsdb-server-sb.log 2022-11-15T05:39:26.248Z|00002|ovsdb_server|INFO|ovsdb-server (Open vSwitch) 2.17.3 2022-11-15T05:39:27.337Z|00003|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:39:36.253Z|00004|memory|INFO|36184 kB peak resident set size after 10.0 seconds 2022-11-15T05:39:36.253Z|00005|memory|INFO|atoms:5754 cells:5334 monitors:5 sessions:3 2022-11-15T05:39:37.325Z|00006|ovsdb_server|INFO|memory trimming after compaction enabled. + true + /usr/bin/ovn-appctl -t /var/run/ovn/ovnsb_db.ctl --timeout=5 ovsdb-server/compact 2022-11-15T05:39:41.163Z|00007|ovsdb_server|INFO|compacting OVN_Southbound database by user request + sleep 600 2022-11-15T05:39:47.324Z|00008|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:39:57.312Z|00009|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:40:07.336Z|00010|ovsdb_server|INFO|memory trimming after compaction enabled. 
2022-11-15T05:40:17.320Z|00011|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:40:27.317Z|00012|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:40:37.322Z|00013|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:40:47.319Z|00014|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:40:57.323Z|00015|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:41:07.324Z|00016|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:41:17.321Z|00017|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:41:27.321Z|00018|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:41:37.326Z|00019|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:41:47.321Z|00020|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:41:57.321Z|00021|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:42:07.317Z|00022|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:42:17.332Z|00023|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:42:27.321Z|00024|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:42:37.318Z|00025|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:42:47.317Z|00026|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:42:57.324Z|00027|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:43:07.325Z|00028|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:43:17.322Z|00029|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:43:27.318Z|00030|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:43:37.324Z|00031|ovsdb_server|INFO|memory trimming after compaction enabled. 
2022-11-15T05:43:47.318Z|00032|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:43:57.317Z|00033|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:44:07.322Z|00034|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:44:17.319Z|00035|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:44:27.322Z|00036|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:44:37.319Z|00037|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:44:47.325Z|00038|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:44:57.319Z|00039|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:45:07.333Z|00040|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:45:17.322Z|00041|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:45:27.321Z|00042|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:45:37.323Z|00043|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:45:47.319Z|00044|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:45:57.323Z|00045|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:46:07.321Z|00046|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:46:17.318Z|00047|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:46:27.318Z|00048|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:46:37.320Z|00049|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:46:47.317Z|00050|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:46:57.315Z|00051|ovsdb_server|INFO|memory trimming after compaction enabled. 2022-11-15T05:47:07.325Z|00052|ovsdb_server|INFO|memory trimming after compaction enabled. 
2022-11-15T05:47:17.325Z|00053|ovsdb_server|INFO|memory trimming after compaction enabled.
2022-11-15T05:47:27.322Z|00054|ovsdb_server|INFO|memory trimming after compaction enabled.
2022-11-15T05:47:37.326Z|00055|ovsdb_server|INFO|memory trimming after compaction enabled.
2022-11-15T05:47:47.319Z|00056|ovsdb_server|INFO|memory trimming after compaction enabled.
2022-11-15T05:47:57.324Z|00057|ovsdb_server|INFO|memory trimming after compaction enabled.
2022-11-15T05:48:07.324Z|00058|ovsdb_server|INFO|memory trimming after compaction enabled.
2022-11-15T05:48:17.326Z|00059|ovsdb_server|INFO|memory trimming after compaction enabled.
2022-11-15T05:48:27.315Z|00060|ovsdb_server|INFO|memory trimming after compaction enabled.
2022-11-15T05:48:37.317Z|00061|ovsdb_server|INFO|memory trimming after compaction enabled.
+ kubectl logs --previous=true -n openshift-ovn-kubernetes pod/ovnkube-master-kdsb7 sbdb
Error from server (BadRequest): previous terminated container "sbdb" in pod "ovnkube-master-kdsb7" not found
+ true
+ for container in $(kubectl get -n $ns $pod -o jsonpath='{.spec.containers[*].name}')
+ kubectl logs -n openshift-ovn-kubernetes pod/ovnkube-master-kdsb7 ovnkube-master
+ [[ -f /env/_master ]]
++ date -Iseconds
2022-11-15T05:39:26+00:00 - starting ovnkube-master, Node: release-ci-ci-op-k5cwk1pv-7cb14 IP: 10.0.0.2
+ echo '2022-11-15T05:39:26+00:00 - starting ovnkube-master, Node: release-ci-ci-op-k5cwk1pv-7cb14 IP: 10.0.0.2'
++ date '+%m%d %H:%M:%S.%N'
+ echo 'I1115 05:39:26.353992605 - copy ovn-k8s-cni-overlay'
I1115 05:39:26.353992605 - copy ovn-k8s-cni-overlay
+ cp -f /usr/libexec/cni/ovn-k8s-cni-overlay /cni-bin-dir/
++ date '+%m%d %H:%M:%S.%N'
I1115 05:39:26.382888597 - disable conntrack on geneve port
+ echo 'I1115 05:39:26.382888597 - disable conntrack on geneve port'
+ iptables -t raw -A PREROUTING -p udp --dport 6081 -j NOTRACK
+ iptables -t raw -A OUTPUT -p udp --dport 6081 -j NOTRACK
+ ip6tables -t raw -A PREROUTING -p udp --dport 6081 -j NOTRACK
+ ip6tables -t raw -A OUTPUT -p udp --dport 6081 -j NOTRACK
++ date '+%m%d %H:%M:%S.%N'
+ echo 'I1115 05:39:26.415376785 - starting ovnkube-node'
I1115 05:39:26.415376785 - starting ovnkube-node
+ gateway_mode_flags='--gateway-mode local --gateway-interface br-ex'
+ sysctl net.ipv4.ip_forward=1
net.ipv4.ip_forward = 1
+ gw_interface_flag=
+ '[' -d /sys/class/net/br-ex1 ']'
++ date '+%m%d %H:%M:%S.%N'
I1115 05:39:26.419832060 - ovnkube-master - start ovnkube --init-master release-ci-ci-op-k5cwk1pv-7cb14 --init-node release-ci-ci-op-k5cwk1pv-7cb14
+ echo 'I1115 05:39:26.419832060 - ovnkube-master - start ovnkube --init-master release-ci-ci-op-k5cwk1pv-7cb14 --init-node release-ci-ci-op-k5cwk1pv-7cb14'
+ exec /usr/bin/ovnkube --init-master release-ci-ci-op-k5cwk1pv-7cb14 --init-node release-ci-ci-op-k5cwk1pv-7cb14 --config-file=/run/ovnkube-config/ovnkube.conf --loglevel 4 --gateway-mode local --gateway-interface br-ex --inactivity-probe=180000 --nb-address '' --sb-address '' --enable-multicast --disable-snat-multiple-gws --acl-logging-rate-limit 20
I1115 05:39:26.562593 63045 ovs.go:90] Maximum command line arguments set to: 191102
I1115 05:39:26.565062 63045 config.go:1802] Parsed config file /run/ovnkube-config/ovnkube.conf
I1115 05:39:26.565081 63045 config.go:1803] Parsed config: {Default:{MTU:1400 RoutableMTU:0 ConntrackZone:64000 EncapType:geneve EncapIP: EncapPort:6081 InactivityProbe:100000 OpenFlowProbe:180 OfctrlWaitBeforeClear:0 MonitorAll:true LFlowCacheEnable:false LFlowCacheLimit:0 LFlowCacheLimitKb:870 RawClusterSubnets:10.42.0.0/16 ClusterSubnets:[] EnableUDPAggregation:false} Logging:{File: CNIFile: Level:4 LogFileMaxSize:100 LogFileMaxBackups:5 LogFileMaxAge:5 ACLLoggingRateLimit:20} Monitoring:{RawNetFlowTargets: RawSFlowTargets: RawIPFIXTargets: NetFlowTargets:[] SFlowTargets:[] IPFIXTargets:[]} IPFIX:{Sampling:400 CacheActiveTimeout:60 CacheMaxFlows:0} CNI:{ConfDir:/etc/cni/net.d Plugin:ovn-k8s-cni-overlay} OVNKubernetesFeature:{EnableEgressIP:false EgressIPReachabiltyTotalTimeout:1 EnableEgressFirewall:false EnableEgressQoS:false EgressIPNodeHealthCheckPort:0} Kubernetes:{Kubeconfig:/var/lib/microshift/resources/kubeadmin/kubeconfig CACert: CAData:[] APIServer:http://localhost:8443 Token: TokenFile: CompatServiceCIDR: RawServiceCIDRs:10.43.0.0/16 ServiceCIDRs:[] OVNConfigNamespace:openshift-ovn-kubernetes OVNEmptyLbEvents:false PodIP: RawNoHostSubnetNodes: NoHostSubnetNodes:nil HostNetworkNamespace:openshift-host-network PlatformType:BareMetal CompatMetricsBindAddress: CompatOVNMetricsBindAddress: CompatMetricsEnablePprof:false} Metrics:{BindAddress: OVNMetricsBindAddress: ExportOVSMetrics:false EnablePprof:false NodeServerPrivKey: NodeServerCert: EnableConfigDuration:false EnableEIPScaleMetrics:false} OvnNorth:{Address: PrivKey: Cert: CACert: CertCommonName: Scheme: ElectionTimer:0 northbound:false exec:} OvnSouth:{Address: PrivKey: Cert: CACert: CertCommonName: Scheme: ElectionTimer:0 northbound:false exec:} Gateway:{Mode:local Interface: EgressGWInterface: NextHop: VLANID:0 NodeportEnable:true DisableSNATMultipleGWs:false V4JoinSubnet:100.64.0.0/16 V6JoinSubnet:fd98::/64 DisablePacketMTUCheck:false RouterSubnet:} MasterHA:{ElectionLeaseDuration:137 ElectionRenewDeadline:107 ElectionRetryPeriod:26} HybridOverlay:{Enabled:false RawClusterSubnets: ClusterSubnets:[] VXLANPort:4789} OvnKubeNode:{Mode:full MgmtPortNetdev: DisableOVNIfaceIdVer:false}}
I1115 05:39:26.568144 63045 client.go:325] "msg"="trying to connect" "database"="OVN_Northbound" "endpoint"="unix:/var/run/ovn/ovnnb_db.sock"
I1115 05:39:26.575167 63045 client.go:783] "msg"="transacting operations" "database"="_Server" "operations"="[{Op:select Table:Database Row:map[] Rows:[] Columns:[name model leader sid] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:}]"
I1115 05:39:26.575474 63045 client.go:260] "msg"="successfully connected" "database"="OVN_Northbound" "endpoint"="unix:/var/run/ovn/ovnnb_db.sock" "sid"=""
I1115 05:39:26.577208 63045 client.go:325] "msg"="trying to connect" "database"="OVN_Southbound" "endpoint"="unix:/var/run/ovn/ovnsb_db.sock"
I1115 05:39:26.580145 63045 client.go:783] "msg"="transacting operations" "database"="_Server" "operations"="[{Op:select Table:Database Row:map[] Rows:[] Columns:[name model leader sid] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:}]"
I1115 05:39:26.580380 63045 client.go:260] "msg"="successfully connected" "database"="OVN_Southbound" "endpoint"="unix:/var/run/ovn/ovnsb_db.sock" "sid"=""
I1115 05:39:26.581689 63045 services_controller.go:56] Creating event broadcaster
I1115 05:39:26.581744 63045 services_controller.go:67] Setting up event handlers for services
I1115 05:39:26.581779 63045 services_controller.go:77] Setting up event handlers for endpoint slices
I1115 05:39:26.581811 63045 egress_services_controller.go:101] Setting up event handlers for Egress Services
I1115 05:39:26.582020 63045 node.go:328] OVN Kube Node initialization, Mode: full
I1115 05:39:26.582085 63045 leaderelection.go:248] attempting to acquire leader lease openshift-ovn-kubernetes/ovn-kubernetes-master...
I1115 05:39:26.582875 63045 reflector.go:219] Starting reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:134 I1115 05:39:26.582894 63045 reflector.go:255] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:134 I1115 05:39:26.583318 63045 reflector.go:219] Starting reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:134 I1115 05:39:26.583331 63045 reflector.go:255] Listing and watching *v1.EndpointSlice from k8s.io/client-go/informers/factory.go:134 I1115 05:39:26.583773 63045 reflector.go:219] Starting reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:134 I1115 05:39:26.583786 63045 reflector.go:255] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:134 I1115 05:39:26.584280 63045 reflector.go:219] Starting reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:134 I1115 05:39:26.584293 63045 reflector.go:255] Listing and watching *v1.NetworkPolicy from k8s.io/client-go/informers/factory.go:134 I1115 05:39:26.584745 63045 reflector.go:219] Starting reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:134 I1115 05:39:26.584756 63045 reflector.go:255] Listing and watching *v1.Namespace from k8s.io/client-go/informers/factory.go:134 I1115 05:39:26.585310 63045 reflector.go:219] Starting reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:134 I1115 05:39:26.585325 63045 reflector.go:255] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:134 I1115 05:39:26.614074 63045 leaderelection.go:258] successfully acquired lease openshift-ovn-kubernetes/ovn-kubernetes-master I1115 05:39:26.614225 63045 event.go:285] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"openshift-ovn-kubernetes", Name:"ovn-kubernetes-master", UID:"19006cd7-d12b-4cc4-89e8-caff769b1922", APIVersion:"v1", ResourceVersion:"542", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' release-ci-ci-op-k5cwk1pv-7cb14 
became leader I1115 05:39:26.614246 63045 event.go:285] Event(v1.ObjectReference{Kind:"Lease", Namespace:"openshift-ovn-kubernetes", Name:"ovn-kubernetes-master", UID:"997aece6-a701-4059-b69b-d9bd97fa23f5", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"544", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' release-ci-ci-op-k5cwk1pv-7cb14 became leader I1115 05:39:26.614261 63045 master.go:96] Won leader election; in active mode I1115 05:39:26.614281 63045 master.go:230] Starting cluster master I1115 05:39:26.614593 63045 model_client.go:354] Update operations generated as: [{Op:update Table:NB_Global Row:map[options:{GoMap:map[northd_probe_interval:5000 use_logical_dp_groups:true]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {975338d6-246c-4dca-9a4f-b865d4f805e9}] Until: Durable: Comment: Lock: UUIDName:}] I1115 05:39:26.614638 63045 transact.go:41] Configuring OVN: [{Op:update Table:NB_Global Row:map[options:{GoMap:map[northd_probe_interval:5000 use_logical_dp_groups:true]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {975338d6-246c-4dca-9a4f-b865d4f805e9}] Until: Durable: Comment: Lock: UUIDName:}] I1115 05:39:26.614683 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:update Table:NB_Global Row:map[options:{GoMap:map[northd_probe_interval:5000 use_logical_dp_groups:true]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {975338d6-246c-4dca-9a4f-b865d4f805e9}] Until: Durable: Comment: Lock: UUIDName:}]" I1115 05:39:26.615323 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="NB_Global" "uuid"="975338d6-246c-4dca-9a4f-b865d4f805e9" I1115 05:39:26.615388 63045 cache.go:1040] cache "msg"="updated row" "database"="OVN_Northbound" "table"="NB_Global" "uuid"="975338d6-246c-4dca-9a4f-b865d4f805e9" "new"="&{UUID:975338d6-246c-4dca-9a4f-b865d4f805e9 Connections:[] ExternalIDs:map[] HvCfg:0 
HvCfgTimestamp:0 Ipsec:false Name: NbCfg:0 NbCfgTimestamp:0 Options:map[northd_probe_interval:5000 use_logical_dp_groups:true] SbCfg:0 SbCfgTimestamp:0 SSL:}" "old"="&{UUID:975338d6-246c-4dca-9a4f-b865d4f805e9 Connections:[] ExternalIDs:map[] HvCfg:0 HvCfgTimestamp:0 Ipsec:false Name: NbCfg:0 NbCfgTimestamp:0 Options:map[northd_probe_interval:5000] SbCfg:0 SbCfgTimestamp:0 SSL:}" I1115 05:39:26.616530 63045 master.go:262] Existing number of nodes: 1 I1115 05:39:26.616560 63045 master.go:269] Allocating subnets I1115 05:39:26.616567 63045 master.go:276] Added network range 10.42.0.0/16 to the allocator I1115 05:39:26.616583 63045 ovs.go:200] Exec(1): /usr/bin/ovn-sbctl --timeout=15 --no-leader-only --columns=_uuid list IGMP_Group I1115 05:39:26.622587 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Southbound" "table"="SB_Global" "uuid"="a2e15b8c-b327-4492-aba9-203561da312a" I1115 05:39:26.622631 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Southbound" "table"="SB_Global" "uuid"="a2e15b8c-b327-4492-aba9-203561da312a" "model"="&{UUID:a2e15b8c-b327-4492-aba9-203561da312a Connections:[] ExternalIDs:map[] Ipsec:false NbCfg:0 Options:map[] SSL:}" I1115 05:39:26.623218 63045 ovs.go:203] Exec(1): stdout: "" I1115 05:39:26.623238 63045 ovs.go:204] Exec(1): stderr: "" I1115 05:39:26.623302 63045 model_client.go:345] Create operations generated as: [{Op:insert Table:Meter_Band Row:map[action:drop rate:20] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996163}] I1115 05:39:26.623344 63045 model_client.go:345] Create operations generated as: [{Op:insert Table:Meter Row:map[bands:{GoSet:[{GoUUID:u2596996163}]} fair:{GoSet:[true]} name:acl-logging unit:pktps] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996164}] I1115 05:39:26.623370 63045 transact.go:41] Configuring OVN: [{Op:insert Table:Meter_Band Row:map[action:drop rate:20] Rows:[] 
Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996163} {Op:insert Table:Meter Row:map[bands:{GoSet:[{GoUUID:u2596996163}]} fair:{GoSet:[true]} name:acl-logging unit:pktps] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996164}] I1115 05:39:26.623419 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:insert Table:Meter_Band Row:map[action:drop rate:20] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996163} {Op:insert Table:Meter Row:map[bands:{GoSet:[{GoUUID:u2596996163}]} fair:{GoSet:[true]} name:acl-logging unit:pktps] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996164}]" I1115 05:39:26.623857 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Meter" "uuid"="3b5fadfc-c377-4792-81b7-7d6fbf8e7051" I1115 05:39:26.623898 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Northbound" "table"="Meter" "uuid"="3b5fadfc-c377-4792-81b7-7d6fbf8e7051" "model"="&{UUID:3b5fadfc-c377-4792-81b7-7d6fbf8e7051 Bands:[e23508e3-51bc-4edc-94b7-7d237eda32ef] ExternalIDs:map[] Fair:0xc00048e0d0 Name:acl-logging Unit:pktps}" I1115 05:39:26.623930 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Meter_Band" "uuid"="e23508e3-51bc-4edc-94b7-7d237eda32ef" I1115 05:39:26.623952 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Northbound" "table"="Meter_Band" "uuid"="e23508e3-51bc-4edc-94b7-7d237eda32ef" "model"="&{UUID:e23508e3-51bc-4edc-94b7-7d237eda32ef Action:drop BurstSize:0 ExternalIDs:map[] Rate:20}" I1115 05:39:26.623995 63045 ovs.go:200] Exec(2): /usr/bin/ovn-nbctl --timeout=15 --columns=_uuid list Load_Balancer_Group I1115 05:39:26.628573 63045 ovs.go:203] Exec(2): stdout: "" I1115 05:39:26.628592 63045 ovs.go:204] Exec(2): stderr: "" I1115 
05:39:26.628631 63045 model_client.go:345] Create operations generated as: [{Op:insert Table:Load_Balancer_Group Row:map[name:clusterLBGroup] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996165}] I1115 05:39:26.628650 63045 transact.go:41] Configuring OVN: [{Op:insert Table:Load_Balancer_Group Row:map[name:clusterLBGroup] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996165}] I1115 05:39:26.628687 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:insert Table:Load_Balancer_Group Row:map[name:clusterLBGroup] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996165}]" I1115 05:39:26.629101 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Load_Balancer_Group" "uuid"="7b81a844-05a7-4d75-90db-fc377eeda1a5" I1115 05:39:26.629128 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Northbound" "table"="Load_Balancer_Group" "uuid"="7b81a844-05a7-4d75-90db-fc377eeda1a5" "model"="&{UUID:7b81a844-05a7-4d75-90db-fc377eeda1a5 LoadBalancer:[] Name:clusterLBGroup}" I1115 05:39:26.629219 63045 model_client.go:345] Create operations generated as: [{Op:insert Table:Logical_Router Row:map[external_ids:{GoMap:map[k8s-cluster-router:yes]} name:ovn_cluster_router options:{GoMap:map[mcast_relay:true]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996166}] I1115 05:39:26.629235 63045 transact.go:41] Configuring OVN: [{Op:wait Table:Logical_Router Row:map[] Rows:[map[name:ovn_cluster_router]] Columns:[name] Mutations:[] Timeout:0xc00048ec30 Where:[where column name == ovn_cluster_router] Until:!= Durable: Comment: Lock: UUIDName:} {Op:insert Table:Logical_Router Row:map[external_ids:{GoMap:map[k8s-cluster-router:yes]} name:ovn_cluster_router options:{GoMap:map[mcast_relay:true]}] Rows:[] Columns:[] 
Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996166}] I1115 05:39:26.629293 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:wait Table:Logical_Router Row:map[] Rows:[map[name:ovn_cluster_router]] Columns:[name] Mutations:[] Timeout:0xc00048ec30 Where:[where column name == ovn_cluster_router] Until:!= Durable: Comment: Lock: UUIDName:} {Op:insert Table:Logical_Router Row:map[external_ids:{GoMap:map[k8s-cluster-router:yes]} name:ovn_cluster_router options:{GoMap:map[mcast_relay:true]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996166}]" I1115 05:39:26.629782 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Logical_Router" "uuid"="0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5" I1115 05:39:26.629819 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Northbound" "table"="Logical_Router" "uuid"="0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5" "model"="&{UUID:0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5 Copp: Enabled: ExternalIDs:map[k8s-cluster-router:yes] LoadBalancer:[] LoadBalancerGroup:[] Name:ovn_cluster_router Nat:[] Options:map[mcast_relay:true] Policies:[] Ports:[] StaticRoutes:[]}" I1115 05:39:26.629858 63045 ovs.go:200] Exec(3): /usr/bin/ovsdb-client list-columns unix:/var/run/ovn/ovnnb_db.sock --data=bare --no-heading --format=json OVN_Northbound Load_Balancer I1115 05:39:26.639247 63045 leaderelection.go:278] successfully renewed lease openshift-ovn-kubernetes/ovn-kubernetes-master I1115 05:39:26.640815 63045 ovs.go:203] Exec(3): stdout: 
"{\"data\":[[\"_version\",\"uuid\"],[\"name\",\"string\"],[\"health_check\",{\"key\":{\"refTable\":\"Load_Balancer_Health_Check\",\"type\":\"uuid\"},\"max\":\"unlimited\",\"min\":0}],[\"protocol\",{\"key\":{\"enum\":[\"set\",[\"sctp\",\"tcp\",\"udp\"]],\"type\":\"string\"},\"min\":0}],[\"selection_fields\",{\"key\":{\"enum\":[\"set\",[\"eth_dst\",\"eth_src\",\"ip_dst\",\"ip_src\",\"tp_dst\",\"tp_src\"]],\"type\":\"string\"},\"max\":\"unlimited\",\"min\":0}],[\"options\",{\"key\":\"string\",\"max\":\"unlimited\",\"min\":0,\"value\":\"string\"}],[\"external_ids\",{\"key\":\"string\",\"max\":\"unlimited\",\"min\":0,\"value\":\"string\"}],[\"ip_port_mappings\",{\"key\":\"string\",\"max\":\"unlimited\",\"min\":0,\"value\":\"string\"}],[\"_uuid\",\"uuid\"],[\"vips\",{\"key\":\"string\",\"max\":\"unlimited\",\"min\":0,\"value\":\"string\"}]],\"headings\":[\"Column\",\"Type\"]}\n" I1115 05:39:26.640846 63045 ovs.go:204] Exec(3): stderr: "" I1115 05:39:26.640945 63045 master.go:367] SCTP support detected in OVN I1115 05:39:26.641008 63045 model_client.go:345] Create operations generated as: [{Op:insert Table:Port_Group Row:map[external_ids:{GoMap:map[name:clusterPortGroup]} name:clusterPortGroup] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996167}] I1115 05:39:26.641030 63045 transact.go:41] Configuring OVN: [{Op:insert Table:Port_Group Row:map[external_ids:{GoMap:map[name:clusterPortGroup]} name:clusterPortGroup] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996167}] I1115 05:39:26.641067 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:insert Table:Port_Group Row:map[external_ids:{GoMap:map[name:clusterPortGroup]} name:clusterPortGroup] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996167}]" I1115 05:39:26.641409 63045 cache.go:999] cache "msg"="processing update" 
"database"="OVN_Northbound" "table"="Port_Group" "uuid"="69f41db0-3b67-40ce-a811-31a29b2cc642" I1115 05:39:26.641451 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Northbound" "table"="Port_Group" "uuid"="69f41db0-3b67-40ce-a811-31a29b2cc642" "model"="&{UUID:69f41db0-3b67-40ce-a811-31a29b2cc642 ACLs:[] ExternalIDs:map[name:clusterPortGroup] Name:clusterPortGroup Ports:[]}" I1115 05:39:26.641518 63045 model_client.go:345] Create operations generated as: [{Op:insert Table:Port_Group Row:map[external_ids:{GoMap:map[name:clusterRtrPortGroup]} name:clusterRtrPortGroup] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996168}] I1115 05:39:26.641537 63045 transact.go:41] Configuring OVN: [{Op:insert Table:Port_Group Row:map[external_ids:{GoMap:map[name:clusterRtrPortGroup]} name:clusterRtrPortGroup] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996168}] I1115 05:39:26.641566 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:insert Table:Port_Group Row:map[external_ids:{GoMap:map[name:clusterRtrPortGroup]} name:clusterRtrPortGroup] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996168}]" I1115 05:39:26.641797 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Port_Group" "uuid"="e5a23782-d9b0-4a5b-88e3-9e3f972e43f2" I1115 05:39:26.641830 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Northbound" "table"="Port_Group" "uuid"="e5a23782-d9b0-4a5b-88e3-9e3f972e43f2" "model"="&{UUID:e5a23782-d9b0-4a5b-88e3-9e3f972e43f2 ACLs:[] ExternalIDs:map[name:clusterRtrPortGroup] Name:clusterRtrPortGroup Ports:[]}" I1115 05:39:26.641895 63045 model_client.go:345] Create operations generated as: [{Op:insert Table:ACL Row:map[action:drop direction:from-lport external_ids:{GoMap:map[default-deny-policy-type:Egress]} log:false 
match:(ip4.mcast || mldv1 || mldv2 || (ip6.dst[120..127] == 0xff && ip6.dst[116] == 1)) meter:{GoSet:[acl-logging]} name:{GoSet:[clusterPortGroup_DefaultDenyMulticastEgress]} options:{GoMap:map[apply-after-lb:true]} priority:1011] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996169}] I1115 05:39:26.641935 63045 model_client.go:345] Create operations generated as: [{Op:insert Table:ACL Row:map[action:drop direction:to-lport external_ids:{GoMap:map[default-deny-policy-type:Ingress]} log:false match:(ip4.mcast || mldv1 || mldv2 || (ip6.dst[120..127] == 0xff && ip6.dst[116] == 1)) meter:{GoSet:[acl-logging]} name:{GoSet:[clusterPortGroup_DefaultDenyMulticastIngress]} priority:1011] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996170}] I1115 05:39:26.641977 63045 model_client.go:370] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:acls Mutator:insert Value:{GoSet:[{GoUUID:u2596996169} {GoUUID:u2596996170}]}}] Timeout: Where:[where column _uuid == {69f41db0-3b67-40ce-a811-31a29b2cc642}] Until: Durable: Comment: Lock: UUIDName:}] I1115 05:39:26.642008 63045 transact.go:41] Configuring OVN: [{Op:insert Table:ACL Row:map[action:drop direction:from-lport external_ids:{GoMap:map[default-deny-policy-type:Egress]} log:false match:(ip4.mcast || mldv1 || mldv2 || (ip6.dst[120..127] == 0xff && ip6.dst[116] == 1)) meter:{GoSet:[acl-logging]} name:{GoSet:[clusterPortGroup_DefaultDenyMulticastEgress]} options:{GoMap:map[apply-after-lb:true]} priority:1011] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996169} {Op:insert Table:ACL Row:map[action:drop direction:to-lport external_ids:{GoMap:map[default-deny-policy-type:Ingress]} log:false match:(ip4.mcast || mldv1 || mldv2 || (ip6.dst[120..127] == 0xff && ip6.dst[116] == 1)) meter:{GoSet:[acl-logging]} 
name:{GoSet:[clusterPortGroup_DefaultDenyMulticastIngress]} priority:1011] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996170} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:acls Mutator:insert Value:{GoSet:[{GoUUID:u2596996169} {GoUUID:u2596996170}]}}] Timeout: Where:[where column _uuid == {69f41db0-3b67-40ce-a811-31a29b2cc642}] Until: Durable: Comment: Lock: UUIDName:}] I1115 05:39:26.642080 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:insert Table:ACL Row:map[action:drop direction:from-lport external_ids:{GoMap:map[default-deny-policy-type:Egress]} log:false match:(ip4.mcast || mldv1 || mldv2 || (ip6.dst[120..127] == 0xff && ip6.dst[116] == 1)) meter:{GoSet:[acl-logging]} name:{GoSet:[clusterPortGroup_DefaultDenyMulticastEgress]} options:{GoMap:map[apply-after-lb:true]} priority:1011] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996169} {Op:insert Table:ACL Row:map[action:drop direction:to-lport external_ids:{GoMap:map[default-deny-policy-type:Ingress]} log:false match:(ip4.mcast || mldv1 || mldv2 || (ip6.dst[120..127] == 0xff && ip6.dst[116] == 1)) meter:{GoSet:[acl-logging]} name:{GoSet:[clusterPortGroup_DefaultDenyMulticastIngress]} priority:1011] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996170} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:acls Mutator:insert Value:{GoSet:[{GoUUID:u2596996169} {GoUUID:u2596996170}]}}] Timeout: Where:[where column _uuid == {69f41db0-3b67-40ce-a811-31a29b2cc642}] Until: Durable: Comment: Lock: UUIDName:}]" I1115 05:39:26.642597 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="ACL" "uuid"="494a3f8a-8a8c-4041-bce9-4640c20a3f3c" I1115 05:39:26.642644 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Northbound" 
"table"="ACL" "uuid"="494a3f8a-8a8c-4041-bce9-4640c20a3f3c" "model"="&{UUID:494a3f8a-8a8c-4041-bce9-4640c20a3f3c Action:drop Direction:to-lport ExternalIDs:map[default-deny-policy-type:Ingress] Label:0 Log:false Match:(ip4.mcast || mldv1 || mldv2 || (ip6.dst[120..127] == 0xff && ip6.dst[116] == 1)) Meter:0xc0003c44c0 Name:0xc0003c4500 Options:map[] Priority:1011 Severity:}" I1115 05:39:26.642667 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="ACL" "uuid"="eb735fe2-016e-43d2-bbd3-0295cc799cd3" I1115 05:39:26.642693 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Northbound" "table"="ACL" "uuid"="eb735fe2-016e-43d2-bbd3-0295cc799cd3" "model"="&{UUID:eb735fe2-016e-43d2-bbd3-0295cc799cd3 Action:drop Direction:from-lport ExternalIDs:map[default-deny-policy-type:Egress] Label:0 Log:false Match:(ip4.mcast || mldv1 || mldv2 || (ip6.dst[120..127] == 0xff && ip6.dst[116] == 1)) Meter:0xc0003c4d70 Name:0xc0003c4da0 Options:map[apply-after-lb:true] Priority:1011 Severity:}" I1115 05:39:26.642712 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Port_Group" "uuid"="69f41db0-3b67-40ce-a811-31a29b2cc642" I1115 05:39:26.642745 63045 cache.go:1040] cache "msg"="updated row" "database"="OVN_Northbound" "table"="Port_Group" "uuid"="69f41db0-3b67-40ce-a811-31a29b2cc642" "new"="&{UUID:69f41db0-3b67-40ce-a811-31a29b2cc642 ACLs:[494a3f8a-8a8c-4041-bce9-4640c20a3f3c eb735fe2-016e-43d2-bbd3-0295cc799cd3] ExternalIDs:map[name:clusterPortGroup] Name:clusterPortGroup Ports:[]}" "old"="&{UUID:69f41db0-3b67-40ce-a811-31a29b2cc642 ACLs:[] ExternalIDs:map[name:clusterPortGroup] Name:clusterPortGroup Ports:[]}" I1115 05:39:26.642814 63045 model_client.go:345] Create operations generated as: [{Op:insert Table:ACL Row:map[action:allow direction:from-lport external_ids:{GoMap:map[default-deny-policy-type:Egress]} log:false match:inport == @clusterRtrPortGroup && (ip4.mcast || mldv1 || mldv2 || 
(ip6.dst[120..127] == 0xff && ip6.dst[116] == 1)) meter:{GoSet:[acl-logging]} name:{GoSet:[clusterRtrPortGroup_DefaultAllowMulticastEgress]} options:{GoMap:map[apply-after-lb:true]} priority:1012] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996171}]
I1115 05:39:26.642856 63045 model_client.go:345] Create operations generated as: [{Op:insert Table:ACL Row:map[action:allow direction:to-lport external_ids:{GoMap:map[default-deny-policy-type:Ingress]} log:false match:outport == @clusterRtrPortGroup && (ip4.mcast || mldv1 || mldv2 || (ip6.dst[120..127] == 0xff && ip6.dst[116] == 1)) meter:{GoSet:[acl-logging]} name:{GoSet:[clusterRtrPortGroup_DefaultAllowMulticastIngress]} priority:1012] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996172}]
I1115 05:39:26.642894 63045 model_client.go:370] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:acls Mutator:insert Value:{GoSet:[{GoUUID:u2596996171} {GoUUID:u2596996172}]}}] Timeout: Where:[where column _uuid == {e5a23782-d9b0-4a5b-88e3-9e3f972e43f2}] Until: Durable: Comment: Lock: UUIDName:}]
I1115 05:39:26.642912 63045 transact.go:41] Configuring OVN: [{Op:insert Table:ACL Row:map[action:allow direction:from-lport external_ids:{GoMap:map[default-deny-policy-type:Egress]} log:false match:inport == @clusterRtrPortGroup && (ip4.mcast || mldv1 || mldv2 || (ip6.dst[120..127] == 0xff && ip6.dst[116] == 1)) meter:{GoSet:[acl-logging]} name:{GoSet:[clusterRtrPortGroup_DefaultAllowMulticastEgress]} options:{GoMap:map[apply-after-lb:true]} priority:1012] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996171} {Op:insert Table:ACL Row:map[action:allow direction:to-lport external_ids:{GoMap:map[default-deny-policy-type:Ingress]} log:false match:outport == @clusterRtrPortGroup && (ip4.mcast || mldv1 || mldv2 || (ip6.dst[120..127] == 0xff && ip6.dst[116] == 1)) meter:{GoSet:[acl-logging]} name:{GoSet:[clusterRtrPortGroup_DefaultAllowMulticastIngress]} priority:1012] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996172} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:acls Mutator:insert Value:{GoSet:[{GoUUID:u2596996171} {GoUUID:u2596996172}]}}] Timeout: Where:[where column _uuid == {e5a23782-d9b0-4a5b-88e3-9e3f972e43f2}] Until: Durable: Comment: Lock: UUIDName:}]
I1115 05:39:26.642992 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:insert Table:ACL Row:map[action:allow direction:from-lport external_ids:{GoMap:map[default-deny-policy-type:Egress]} log:false match:inport == @clusterRtrPortGroup && (ip4.mcast || mldv1 || mldv2 || (ip6.dst[120..127] == 0xff && ip6.dst[116] == 1)) meter:{GoSet:[acl-logging]} name:{GoSet:[clusterRtrPortGroup_DefaultAllowMulticastEgress]} options:{GoMap:map[apply-after-lb:true]} priority:1012] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996171} {Op:insert Table:ACL Row:map[action:allow direction:to-lport external_ids:{GoMap:map[default-deny-policy-type:Ingress]} log:false match:outport == @clusterRtrPortGroup && (ip4.mcast || mldv1 || mldv2 || (ip6.dst[120..127] == 0xff && ip6.dst[116] == 1)) meter:{GoSet:[acl-logging]} name:{GoSet:[clusterRtrPortGroup_DefaultAllowMulticastIngress]} priority:1012] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996172} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:acls Mutator:insert Value:{GoSet:[{GoUUID:u2596996171} {GoUUID:u2596996172}]}}] Timeout: Where:[where column _uuid == {e5a23782-d9b0-4a5b-88e3-9e3f972e43f2}] Until: Durable: Comment: Lock: UUIDName:}]"
I1115 05:39:26.643443 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="ACL" "uuid"="1c8e57af-0f1f-4634-bcb0-537ca919af96"
I1115 05:39:26.643486 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Northbound" "table"="ACL" "uuid"="1c8e57af-0f1f-4634-bcb0-537ca919af96" "model"="&{UUID:1c8e57af-0f1f-4634-bcb0-537ca919af96 Action:allow Direction:to-lport ExternalIDs:map[default-deny-policy-type:Ingress] Label:0 Log:false Match:outport == @clusterRtrPortGroup && (ip4.mcast || mldv1 || mldv2 || (ip6.dst[120..127] == 0xff && ip6.dst[116] == 1)) Meter:0xc0003e6b50 Name:0xc0003e6b90 Options:map[] Priority:1012 Severity:}"
I1115 05:39:26.643528 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="ACL" "uuid"="99f0603a-ca15-45fa-9665-4d093f4e20ae"
I1115 05:39:26.643552 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Northbound" "table"="ACL" "uuid"="99f0603a-ca15-45fa-9665-4d093f4e20ae" "model"="&{UUID:99f0603a-ca15-45fa-9665-4d093f4e20ae Action:allow Direction:from-lport ExternalIDs:map[default-deny-policy-type:Egress] Label:0 Log:false Match:inport == @clusterRtrPortGroup && (ip4.mcast || mldv1 || mldv2 || (ip6.dst[120..127] == 0xff && ip6.dst[116] == 1)) Meter:0xc0003e7190 Name:0xc0003e71b0 Options:map[apply-after-lb:true] Priority:1012 Severity:}"
I1115 05:39:26.643568 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Port_Group" "uuid"="e5a23782-d9b0-4a5b-88e3-9e3f972e43f2"
I1115 05:39:26.643598 63045 cache.go:1040] cache "msg"="updated row" "database"="OVN_Northbound" "table"="Port_Group" "uuid"="e5a23782-d9b0-4a5b-88e3-9e3f972e43f2" "new"="&{UUID:e5a23782-d9b0-4a5b-88e3-9e3f972e43f2 ACLs:[1c8e57af-0f1f-4634-bcb0-537ca919af96 99f0603a-ca15-45fa-9665-4d093f4e20ae] ExternalIDs:map[name:clusterRtrPortGroup] Name:clusterRtrPortGroup Ports:[]}" "old"="&{UUID:e5a23782-d9b0-4a5b-88e3-9e3f972e43f2 ACLs:[] ExternalIDs:map[name:clusterRtrPortGroup] Name:clusterRtrPortGroup Ports:[]}"
I1115 05:39:26.643676 63045 model_client.go:345] Create operations generated as: [{Op:insert Table:Logical_Switch Row:map[name:join] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996173}]
I1115 05:39:26.643692 63045 transact.go:41] Configuring OVN: [{Op:wait Table:Logical_Switch Row:map[] Rows:[map[name:join]] Columns:[name] Mutations:[] Timeout:0xc0008c7488 Where:[where column name == join] Until:!= Durable: Comment: Lock: UUIDName:} {Op:insert Table:Logical_Switch Row:map[name:join] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996173}]
I1115 05:39:26.643726 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:wait Table:Logical_Switch Row:map[] Rows:[map[name:join]] Columns:[name] Mutations:[] Timeout:0xc0008c7488 Where:[where column name == join] Until:!= Durable: Comment: Lock: UUIDName:} {Op:insert Table:Logical_Switch Row:map[name:join] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996173}]"
I1115 05:39:26.643952 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Logical_Switch" "uuid"="4a7292ab-0326-4438-83d1-4c6f4765fce0"
I1115 05:39:26.643993 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Northbound" "table"="Logical_Switch" "uuid"="4a7292ab-0326-4438-83d1-4c6f4765fce0" "model"="&{UUID:4a7292ab-0326-4438-83d1-4c6f4765fce0 ACLs:[] Copp: DNSRecords:[] ExternalIDs:map[] ForwardingGroups:[] LoadBalancer:[] LoadBalancerGroup:[] Name:join OtherConfig:map[] Ports:[] QOSRules:[]}"
W1115 05:39:26.644040 63045 logical_switch_manager.go:590] Failed to get IPs for logical router port rtoj-GR_release-ci-ci-op-k5cwk1pv-7cb14
W1115 05:39:26.644055 63045 logical_switch_manager.go:590] Failed to get IPs for logical router port rtoj-GR_ovn_cluster_router
I1115 05:39:26.644082 63045 model_client.go:345] Create operations generated as: [{Op:insert Table:Logical_Router_Port Row:map[mac:0a:58:64:40:00:01 name:rtoj-ovn_cluster_router networks:{GoSet:[100.64.0.1/16]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996174}]
I1115 05:39:26.644122 63045 model_client.go:370] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:u2596996174}]}}] Timeout: Where:[where column _uuid == {0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5}] Until: Durable: Comment: Lock: UUIDName:}]
I1115 05:39:26.644138 63045 transact.go:41] Configuring OVN: [{Op:insert Table:Logical_Router_Port Row:map[mac:0a:58:64:40:00:01 name:rtoj-ovn_cluster_router networks:{GoSet:[100.64.0.1/16]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996174} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:u2596996174}]}}] Timeout: Where:[where column _uuid == {0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5}] Until: Durable: Comment: Lock: UUIDName:}]
I1115 05:39:26.644181 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:insert Table:Logical_Router_Port Row:map[mac:0a:58:64:40:00:01 name:rtoj-ovn_cluster_router networks:{GoSet:[100.64.0.1/16]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996174} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:u2596996174}]}}] Timeout: Where:[where column _uuid == {0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5}] Until: Durable: Comment: Lock: UUIDName:}]"
I1115 05:39:26.644450 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Logical_Router_Port" "uuid"="d132d1a8-41f8-430a-a203-0d05cafce999"
I1115 05:39:26.644488 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Northbound" "table"="Logical_Router_Port" "uuid"="d132d1a8-41f8-430a-a203-0d05cafce999" "model"="&{UUID:d132d1a8-41f8-430a-a203-0d05cafce999 Enabled: ExternalIDs:map[] GatewayChassis:[] HaChassisGroup: Ipv6Prefix:[] Ipv6RaConfigs:map[] MAC:0a:58:64:40:00:01 Name:rtoj-ovn_cluster_router Networks:[100.64.0.1/16] Options:map[] Peer:}"
I1115 05:39:26.644556 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Logical_Router" "uuid"="0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5"
I1115 05:39:26.644593 63045 cache.go:1040] cache "msg"="updated row" "database"="OVN_Northbound" "table"="Logical_Router" "uuid"="0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5" "new"="&{UUID:0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5 Copp: Enabled: ExternalIDs:map[k8s-cluster-router:yes] LoadBalancer:[] LoadBalancerGroup:[] Name:ovn_cluster_router Nat:[] Options:map[mcast_relay:true] Policies:[] Ports:[d132d1a8-41f8-430a-a203-0d05cafce999] StaticRoutes:[]}" "old"="&{UUID:0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5 Copp: Enabled: ExternalIDs:map[k8s-cluster-router:yes] LoadBalancer:[] LoadBalancerGroup:[] Name:ovn_cluster_router Nat:[] Options:map[mcast_relay:true] Policies:[] Ports:[] StaticRoutes:[]}"
I1115 05:39:26.644657 63045 model_client.go:345] Create operations generated as: [{Op:insert Table:Logical_Switch_Port Row:map[addresses:{GoSet:[router]} name:jtor-ovn_cluster_router options:{GoMap:map[router-port:rtoj-ovn_cluster_router]} type:router] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996175}]
I1115 05:39:26.644697 63045 model_client.go:370] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:u2596996175}]}}] Timeout: Where:[where column _uuid == {4a7292ab-0326-4438-83d1-4c6f4765fce0}] Until: Durable: Comment: Lock: UUIDName:}]
I1115 05:39:26.644710 63045 transact.go:41] Configuring OVN: [{Op:insert Table:Logical_Switch_Port Row:map[addresses:{GoSet:[router]} name:jtor-ovn_cluster_router options:{GoMap:map[router-port:rtoj-ovn_cluster_router]} type:router] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996175} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:u2596996175}]}}] Timeout: Where:[where column _uuid == {4a7292ab-0326-4438-83d1-4c6f4765fce0}] Until: Durable: Comment: Lock: UUIDName:}]
I1115 05:39:26.644758 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:insert Table:Logical_Switch_Port Row:map[addresses:{GoSet:[router]} name:jtor-ovn_cluster_router options:{GoMap:map[router-port:rtoj-ovn_cluster_router]} type:router] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996175} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:u2596996175}]}}] Timeout: Where:[where column _uuid == {4a7292ab-0326-4438-83d1-4c6f4765fce0}] Until: Durable: Comment: Lock: UUIDName:}]"
I1115 05:39:26.645048 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Logical_Switch" "uuid"="4a7292ab-0326-4438-83d1-4c6f4765fce0"
I1115 05:39:26.645096 63045 cache.go:1040] cache "msg"="updated row" "database"="OVN_Northbound" "table"="Logical_Switch" "uuid"="4a7292ab-0326-4438-83d1-4c6f4765fce0" "new"="&{UUID:4a7292ab-0326-4438-83d1-4c6f4765fce0 ACLs:[] Copp: DNSRecords:[] ExternalIDs:map[] ForwardingGroups:[] LoadBalancer:[] LoadBalancerGroup:[] Name:join OtherConfig:map[] Ports:[402f3397-e1d9-4671-bc59-c4e9a435a625] QOSRules:[]}" "old"="&{UUID:4a7292ab-0326-4438-83d1-4c6f4765fce0 ACLs:[] Copp: DNSRecords:[] ExternalIDs:map[] ForwardingGroups:[] LoadBalancer:[] LoadBalancerGroup:[] Name:join OtherConfig:map[] Ports:[] QOSRules:[]}"
I1115 05:39:26.645109 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Logical_Switch_Port" "uuid"="402f3397-e1d9-4671-bc59-c4e9a435a625"
I1115 05:39:26.645143 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Northbound" "table"="Logical_Switch_Port" "uuid"="402f3397-e1d9-4671-bc59-c4e9a435a625" "model"="&{UUID:402f3397-e1d9-4671-bc59-c4e9a435a625 Addresses:[router] Dhcpv4Options: Dhcpv6Options: DynamicAddresses: Enabled: ExternalIDs:map[] HaChassisGroup: Name:jtor-ovn_cluster_router Options:map[router-port:rtoj-ovn_cluster_router] ParentName: PortSecurity:[] Tag: TagRequest: Type:router Up:}"
I1115 05:39:26.645200 63045 model_client.go:345] Create operations generated as: [{Op:insert Table:Meter_Band Row:map[action:drop rate:25] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996176}]
I1115 05:39:26.645231 63045 model_client.go:345] Create operations generated as: [{Op:insert Table:Meter Row:map[bands:{GoSet:[{GoUUID:u2596996176}]} fair:{GoSet:[true]} name:arp-rate-limiter unit:pktps] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996177}]
I1115 05:39:26.645254 63045 model_client.go:345] Create operations generated as: [{Op:insert Table:Meter Row:map[bands:{GoSet:[{GoUUID:u2596996176}]} fair:{GoSet:[true]} name:arp-resolve-rate-limiter unit:pktps] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996178}]
I1115 05:39:26.645283 63045 model_client.go:345] Create operations generated as: [{Op:insert Table:Meter Row:map[bands:{GoSet:[{GoUUID:u2596996176}]} fair:{GoSet:[true]} name:bfd-rate-limiter unit:pktps] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996179}]
I1115 05:39:26.645303 63045 model_client.go:345] Create operations generated as: [{Op:insert Table:Meter Row:map[bands:{GoSet:[{GoUUID:u2596996176}]} fair:{GoSet:[true]} name:event-elb-rate-limiter unit:pktps] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996180}]
I1115 05:39:26.645326 63045 model_client.go:345] Create operations generated as: [{Op:insert Table:Meter Row:map[bands:{GoSet:[{GoUUID:u2596996176}]} fair:{GoSet:[true]} name:icmp4-error-rate-limiter unit:pktps] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996181}]
I1115 05:39:26.645348 63045 model_client.go:345] Create operations generated as: [{Op:insert Table:Meter Row:map[bands:{GoSet:[{GoUUID:u2596996176}]} fair:{GoSet:[true]} name:icmp6-error-rate-limiter unit:pktps] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996182}]
I1115 05:39:26.645367 63045 model_client.go:345] Create operations generated as: [{Op:insert Table:Meter Row:map[bands:{GoSet:[{GoUUID:u2596996176}]} fair:{GoSet:[true]} name:reject-rate-limiter unit:pktps] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996183}]
I1115 05:39:26.645385 63045 model_client.go:345] Create operations generated as: [{Op:insert Table:Meter Row:map[bands:{GoSet:[{GoUUID:u2596996176}]} fair:{GoSet:[true]} name:tcp-reset-rate-limiter unit:pktps] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996184}]
I1115 05:39:26.645412 63045 model_client.go:345] Create operations generated as: [{Op:insert Table:Copp Row:map[meters:{GoMap:map[arp:arp-rate-limiter arp-resolve:arp-resolve-rate-limiter bfd:bfd-rate-limiter event-elb:event-elb-rate-limiter icmp4-error:icmp4-error-rate-limiter icmp6-error:icmp6-error-rate-limiter reject:reject-rate-limiter tcp-reset:tcp-reset-rate-limiter]} name:ovnkube-default] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996185}]
I1115 05:39:26.645432 63045 transact.go:41] Configuring OVN: [{Op:insert Table:Meter_Band Row:map[action:drop rate:25] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996176} {Op:insert Table:Meter Row:map[bands:{GoSet:[{GoUUID:u2596996176}]} fair:{GoSet:[true]} name:arp-rate-limiter unit:pktps] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996177} {Op:insert Table:Meter Row:map[bands:{GoSet:[{GoUUID:u2596996176}]} fair:{GoSet:[true]} name:arp-resolve-rate-limiter unit:pktps] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996178} {Op:insert Table:Meter Row:map[bands:{GoSet:[{GoUUID:u2596996176}]} fair:{GoSet:[true]} name:bfd-rate-limiter unit:pktps] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996179} {Op:insert Table:Meter Row:map[bands:{GoSet:[{GoUUID:u2596996176}]} fair:{GoSet:[true]} name:event-elb-rate-limiter unit:pktps] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996180} {Op:insert Table:Meter Row:map[bands:{GoSet:[{GoUUID:u2596996176}]} fair:{GoSet:[true]} name:icmp4-error-rate-limiter unit:pktps] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996181} {Op:insert Table:Meter Row:map[bands:{GoSet:[{GoUUID:u2596996176}]} fair:{GoSet:[true]} name:icmp6-error-rate-limiter unit:pktps] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996182} {Op:insert Table:Meter Row:map[bands:{GoSet:[{GoUUID:u2596996176}]} fair:{GoSet:[true]} name:reject-rate-limiter unit:pktps] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996183} {Op:insert Table:Meter Row:map[bands:{GoSet:[{GoUUID:u2596996176}]} fair:{GoSet:[true]} name:tcp-reset-rate-limiter unit:pktps] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996184} {Op:insert Table:Copp Row:map[meters:{GoMap:map[arp:arp-rate-limiter arp-resolve:arp-resolve-rate-limiter bfd:bfd-rate-limiter event-elb:event-elb-rate-limiter icmp4-error:icmp4-error-rate-limiter icmp6-error:icmp6-error-rate-limiter reject:reject-rate-limiter tcp-reset:tcp-reset-rate-limiter]} name:ovnkube-default] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996185}]
I1115 05:39:26.645589 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:insert Table:Meter_Band Row:map[action:drop rate:25] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996176} {Op:insert Table:Meter Row:map[bands:{GoSet:[{GoUUID:u2596996176}]} fair:{GoSet:[true]} name:arp-rate-limiter unit:pktps] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996177} {Op:insert Table:Meter Row:map[bands:{GoSet:[{GoUUID:u2596996176}]} fair:{GoSet:[true]} name:arp-resolve-rate-limiter unit:pktps] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996178} {Op:insert Table:Meter Row:map[bands:{GoSet:[{GoUUID:u2596996176}]} fair:{GoSet:[true]} name:bfd-rate-limiter unit:pktps] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996179} {Op:insert Table:Meter Row:map[bands:{GoSet:[{GoUUID:u2596996176}]} fair:{GoSet:[true]} name:event-elb-rate-limiter unit:pktps] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996180} {Op:insert Table:Meter Row:map[bands:{GoSet:[{GoUUID:u2596996176}]} fair:{GoSet:[true]} name:icmp4-error-rate-limiter unit:pktps] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996181} {Op:insert Table:Meter Row:map[bands:{GoSet:[{GoUUID:u2596996176}]} fair:{GoSet:[true]} name:icmp6-error-rate-limiter unit:pktps] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996182} {Op:insert Table:Meter Row:map[bands:{GoSet:[{GoUUID:u2596996176}]} fair:{GoSet:[true]} name:reject-rate-limiter unit:pktps] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996183} {Op:insert Table:Meter Row:map[bands:{GoSet:[{GoUUID:u2596996176}]} fair:{GoSet:[true]} name:tcp-reset-rate-limiter unit:pktps] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996184} {Op:insert Table:Copp Row:map[meters:{GoMap:map[arp:arp-rate-limiter arp-resolve:arp-resolve-rate-limiter bfd:bfd-rate-limiter event-elb:event-elb-rate-limiter icmp4-error:icmp4-error-rate-limiter icmp6-error:icmp6-error-rate-limiter reject:reject-rate-limiter tcp-reset:tcp-reset-rate-limiter]} name:ovnkube-default] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996185}]"
I1115 05:39:26.646223 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Meter" "uuid"="a8f8a2b9-cb22-4f0d-9f17-8a7db8a27575"
I1115 05:39:26.646257 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Northbound" "table"="Meter" "uuid"="a8f8a2b9-cb22-4f0d-9f17-8a7db8a27575" "model"="&{UUID:a8f8a2b9-cb22-4f0d-9f17-8a7db8a27575 Bands:[0c181c1a-42a9-4b5e-894f-aa75125c466a] ExternalIDs:map[] Fair:0xc000914738 Name:arp-resolve-rate-limiter Unit:pktps}"
I1115 05:39:26.646282 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Meter" "uuid"="28a72559-65c5-4fe8-84c3-4f4ef2633719"
I1115 05:39:26.646303 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Northbound" "table"="Meter" "uuid"="28a72559-65c5-4fe8-84c3-4f4ef2633719" "model"="&{UUID:28a72559-65c5-4fe8-84c3-4f4ef2633719 Bands:[0c181c1a-42a9-4b5e-894f-aa75125c466a] ExternalIDs:map[] Fair:0xc000914828 Name:reject-rate-limiter Unit:pktps}"
I1115 05:39:26.646319 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Meter" "uuid"="2f27322b-6ab6-4668-a579-3599ed97c405"
I1115 05:39:26.646337 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Northbound" "table"="Meter" "uuid"="2f27322b-6ab6-4668-a579-3599ed97c405" "model"="&{UUID:2f27322b-6ab6-4668-a579-3599ed97c405 Bands:[0c181c1a-42a9-4b5e-894f-aa75125c466a] ExternalIDs:map[] Fair:0xc000914918 Name:icmp6-error-rate-limiter Unit:pktps}"
I1115 05:39:26.646351 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Meter" "uuid"="3b1c46fb-c7e9-41af-bdf6-eeef464141fc"
I1115 05:39:26.646366 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Northbound" "table"="Meter" "uuid"="3b1c46fb-c7e9-41af-bdf6-eeef464141fc" "model"="&{UUID:3b1c46fb-c7e9-41af-bdf6-eeef464141fc Bands:[0c181c1a-42a9-4b5e-894f-aa75125c466a] ExternalIDs:map[] Fair:0xc000914a08 Name:arp-rate-limiter Unit:pktps}"
I1115 05:39:26.646382 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Meter" "uuid"="47eb66d8-c577-490f-b4b0-b4068aa1efc5"
I1115 05:39:26.646404 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Northbound" "table"="Meter" "uuid"="47eb66d8-c577-490f-b4b0-b4068aa1efc5" "model"="&{UUID:47eb66d8-c577-490f-b4b0-b4068aa1efc5 Bands:[0c181c1a-42a9-4b5e-894f-aa75125c466a] ExternalIDs:map[] Fair:0xc000914af8 Name:bfd-rate-limiter Unit:pktps}"
I1115 05:39:26.646420 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Meter" "uuid"="6ad83597-9f4c-455c-88f0-6ef493624ab4"
I1115 05:39:26.646438 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Northbound" "table"="Meter" "uuid"="6ad83597-9f4c-455c-88f0-6ef493624ab4" "model"="&{UUID:6ad83597-9f4c-455c-88f0-6ef493624ab4 Bands:[0c181c1a-42a9-4b5e-894f-aa75125c466a] ExternalIDs:map[] Fair:0xc000914bf0 Name:tcp-reset-rate-limiter Unit:pktps}"
I1115 05:39:26.646452 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Meter" "uuid"="83d58a0b-cfd0-40ab-b87f-2e1e9307b9b0"
I1115 05:39:26.646468 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Northbound" "table"="Meter" "uuid"="83d58a0b-cfd0-40ab-b87f-2e1e9307b9b0" "model"="&{UUID:83d58a0b-cfd0-40ab-b87f-2e1e9307b9b0 Bands:[0c181c1a-42a9-4b5e-894f-aa75125c466a] ExternalIDs:map[] Fair:0xc000914cd0 Name:event-elb-rate-limiter Unit:pktps}"
I1115 05:39:26.646482 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Meter" "uuid"="853fa47d-f56c-48b2-bf7f-05e4c6bc9307"
I1115 05:39:26.646511 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Northbound" "table"="Meter" "uuid"="853fa47d-f56c-48b2-bf7f-05e4c6bc9307" "model"="&{UUID:853fa47d-f56c-48b2-bf7f-05e4c6bc9307 Bands:[0c181c1a-42a9-4b5e-894f-aa75125c466a] ExternalIDs:map[] Fair:0xc000914dc8 Name:icmp4-error-rate-limiter Unit:pktps}"
I1115 05:39:26.646533 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Copp" "uuid"="361a0ade-a33a-40af-b4f6-dc5a6910be61"
I1115 05:39:26.646563 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Northbound" "table"="Copp" "uuid"="361a0ade-a33a-40af-b4f6-dc5a6910be61" "model"="&{UUID:361a0ade-a33a-40af-b4f6-dc5a6910be61 ExternalIDs:map[] Meters:map[arp:arp-rate-limiter arp-resolve:arp-resolve-rate-limiter bfd:bfd-rate-limiter event-elb:event-elb-rate-limiter icmp4-error:icmp4-error-rate-limiter icmp6-error:icmp6-error-rate-limiter reject:reject-rate-limiter tcp-reset:tcp-reset-rate-limiter] Name:ovnkube-default}"
I1115 05:39:26.646582 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Meter_Band" "uuid"="0c181c1a-42a9-4b5e-894f-aa75125c466a"
I1115 05:39:26.646602 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Northbound" "table"="Meter_Band" "uuid"="0c181c1a-42a9-4b5e-894f-aa75125c466a" "model"="&{UUID:0c181c1a-42a9-4b5e-894f-aa75125c466a Action:drop BurstSize:0 ExternalIDs:map[] Rate:25}"
I1115 05:39:26.646664 63045 shared_informer.go:285] caches populated
I1115 05:39:26.646683 63045 shared_informer.go:285] caches populated
I1115 05:39:26.646688 63045 shared_informer.go:285] caches populated
I1115 05:39:26.646692 63045 shared_informer.go:285] caches populated
I1115 05:39:26.646697 63045 shared_informer.go:285] caches populated
I1115 05:39:26.646702 63045 shared_informer.go:285] caches populated
I1115 05:39:26.646709 63045 ovn.go:342] Starting all the Watchers...
I1115 05:39:26.646754 63045 egressgw.go:862] Syncing exgw routes took 35.732µs
I1115 05:39:26.646828 63045 obj_retry.go:1380] Add event received for *v1.Namespace, key=default
I1115 05:39:26.646851 63045 namespace.go:184] [default] adding namespace
I1115 05:39:26.646925 63045 model_client.go:345] Create operations generated as: [{Op:insert Table:Address_Set Row:map[external_ids:{GoMap:map[name:default_v4]} name:a5154718082306775057] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996186}]
I1115 05:39:26.646943 63045 transact.go:41] Configuring OVN: [{Op:insert Table:Address_Set Row:map[external_ids:{GoMap:map[name:default_v4]} name:a5154718082306775057] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996186}]
I1115 05:39:26.646970 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:insert Table:Address_Set Row:map[external_ids:{GoMap:map[name:default_v4]} name:a5154718082306775057] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996186}]"
I1115 05:39:26.647003 63045 obj_retry.go:1380] Add event received for *v1.Namespace, key=openshift-ingress
I1115 05:39:26.647012 63045 namespace.go:184] [openshift-ingress] adding namespace
I1115 05:39:26.647026 63045 obj_retry.go:1380] Add event received for *v1.Namespace, key=kube-public
I1115 05:39:26.647030 63045 namespace.go:184] [kube-public] adding namespace
I1115 05:39:26.647043 63045 obj_retry.go:1380] Add event received for *v1.Namespace, key=openshift-storage
I1115 05:39:26.647047 63045 namespace.go:184] [openshift-storage] adding namespace
I1115 05:39:26.647059 63045 obj_retry.go:1380] Add event received for *v1.Namespace, key=openshift-infra
I1115 05:39:26.647063 63045 namespace.go:184] [openshift-infra] adding namespace
I1115 05:39:26.647076 63045 obj_retry.go:1380] Add event received for *v1.Namespace, key=openshift-dns
I1115 05:39:26.647080 63045 namespace.go:184] [openshift-dns] adding namespace
I1115 05:39:26.647092 63045 obj_retry.go:1380] Add event received for *v1.Namespace, key=openshift-service-ca
I1115 05:39:26.647096 63045 namespace.go:184] [openshift-service-ca] adding namespace
I1115 05:39:26.647108 63045 obj_retry.go:1380] Add event received for *v1.Namespace, key=kube-node-lease
I1115 05:39:26.647120 63045 namespace.go:184] [kube-node-lease] adding namespace
I1115 05:39:26.647131 63045 obj_retry.go:1380] Add event received for *v1.Namespace, key=kube-system
I1115 05:39:26.647136 63045 namespace.go:184] [kube-system] adding namespace
I1115 05:39:26.647373 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Address_Set" "uuid"="993cad0e-4af9-4603-9ebe-c054f9c7b643"
I1115 05:39:26.647405 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Northbound" "table"="Address_Set" "uuid"="993cad0e-4af9-4603-9ebe-c054f9c7b643" "model"="&{UUID:993cad0e-4af9-4603-9ebe-c054f9c7b643 Addresses:[] ExternalIDs:map[name:default_v4] Name:a5154718082306775057}"
I1115 05:39:26.647435 63045 address_set.go:308] New(993cad0e-4af9-4603-9ebe-c054f9c7b643/default_v4/a5154718082306775057) with []
I1115 05:39:26.647443 63045 namespace.go:188] [default] adding namespace took 571.307µs
I1115 05:39:26.647451 63045 obj_retry.go:1415] Creating *v1.Namespace default took: 600.306µs
I1115 05:39:26.647511 63045 model_client.go:345] Create operations generated as: [{Op:insert Table:Address_Set Row:map[external_ids:{GoMap:map[name:openshift-ingress_v4]} name:a7228108612096671536] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996187}]
I1115 05:39:26.647523 63045 transact.go:41] Configuring OVN: [{Op:insert Table:Address_Set Row:map[external_ids:{GoMap:map[name:openshift-ingress_v4]} name:a7228108612096671536] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996187}]
I1115 05:39:26.647549 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:insert Table:Address_Set Row:map[external_ids:{GoMap:map[name:openshift-ingress_v4]} name:a7228108612096671536] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996187}]"
I1115 05:39:26.647769 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Address_Set" "uuid"="c8d0d3b5-0ba2-4746-9b87-69e77f04047c"
I1115 05:39:26.647798 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Northbound" "table"="Address_Set" "uuid"="c8d0d3b5-0ba2-4746-9b87-69e77f04047c" "model"="&{UUID:c8d0d3b5-0ba2-4746-9b87-69e77f04047c Addresses:[] ExternalIDs:map[name:openshift-ingress_v4] Name:a7228108612096671536}"
I1115 05:39:26.647825 63045 address_set.go:308] New(c8d0d3b5-0ba2-4746-9b87-69e77f04047c/openshift-ingress_v4/a7228108612096671536) with []
I1115 05:39:26.647836 63045 namespace.go:188] [openshift-ingress] adding namespace took 813.662µs
I1115 05:39:26.647844 63045 obj_retry.go:1415] Creating *v1.Namespace openshift-ingress took: 829.385µs
I1115 05:39:26.647884 63045 model_client.go:345] Create operations generated as: [{Op:insert Table:Address_Set Row:map[external_ids:{GoMap:map[name:kube-public_v4]} name:a18363165982804349389] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996188}]
I1115 05:39:26.647902 63045 transact.go:41] Configuring OVN: [{Op:insert Table:Address_Set Row:map[external_ids:{GoMap:map[name:kube-public_v4]} name:a18363165982804349389] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996188}]
I1115 05:39:26.647928 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:insert Table:Address_Set Row:map[external_ids:{GoMap:map[name:kube-public_v4]} name:a18363165982804349389] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996188}]"
I1115 05:39:26.648136 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Address_Set" "uuid"="8b49d554-6103-4404-9e5a-4f58ac0743db"
I1115 05:39:26.648167 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Northbound" "table"="Address_Set" "uuid"="8b49d554-6103-4404-9e5a-4f58ac0743db" "model"="&{UUID:8b49d554-6103-4404-9e5a-4f58ac0743db Addresses:[] ExternalIDs:map[name:kube-public_v4] Name:a18363165982804349389}"
I1115 05:39:26.648193 63045 address_set.go:308] New(8b49d554-6103-4404-9e5a-4f58ac0743db/kube-public_v4/a18363165982804349389) with []
I1115 05:39:26.648199 63045 namespace.go:188] [kube-public] adding namespace took 1.159676ms
I1115 05:39:26.648207 63045 obj_retry.go:1415] Creating *v1.Namespace kube-public took: 1.173368ms
I1115 05:39:26.648244 63045 model_client.go:345] Create operations generated as: [{Op:insert Table:Address_Set Row:map[external_ids:{GoMap:map[name:openshift-storage_v4]} name:a15748973720423176978] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996189}]
I1115 05:39:26.648283 63045 transact.go:41] Configuring OVN: [{Op:insert Table:Address_Set Row:map[external_ids:{GoMap:map[name:openshift-storage_v4]} name:a15748973720423176978] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996189}]
I1115 05:39:26.648308 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:insert Table:Address_Set Row:map[external_ids:{GoMap:map[name:openshift-storage_v4]} name:a15748973720423176978] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996189}]"
I1115 05:39:26.648526 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Address_Set" "uuid"="28858321-dd27-4de7-853e-70d96eeed103"
I1115 05:39:26.648552 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Northbound" "table"="Address_Set" "uuid"="28858321-dd27-4de7-853e-70d96eeed103" "model"="&{UUID:28858321-dd27-4de7-853e-70d96eeed103 Addresses:[] ExternalIDs:map[name:openshift-storage_v4] Name:a15748973720423176978}"
I1115 05:39:26.648576 63045 address_set.go:308] New(28858321-dd27-4de7-853e-70d96eeed103/openshift-storage_v4/a15748973720423176978) with []
I1115 05:39:26.648585 63045 namespace.go:188] [openshift-storage] adding namespace took 1.529509ms
I1115 05:39:26.648592 63045 obj_retry.go:1415] Creating *v1.Namespace openshift-storage took: 1.542582ms
I1115 05:39:26.648600 63045 obj_retry.go:1380] Add event received for *v1.Namespace, key=openshift-ovn-kubernetes
I1115 05:39:26.648605 63045 namespace.go:184] [openshift-ovn-kubernetes] adding namespace
I1115 05:39:26.648626 63045 model_client.go:345] Create operations generated as: [{Op:insert Table:Address_Set Row:map[external_ids:{GoMap:map[name:openshift-ovn-kubernetes_v4]} name:a3826097561732631257] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996190}]
I1115 05:39:26.648641 63045 transact.go:41] Configuring OVN: [{Op:insert Table:Address_Set Row:map[external_ids:{GoMap:map[name:openshift-ovn-kubernetes_v4]} name:a3826097561732631257] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996190}]
I1115 05:39:26.648665 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:insert Table:Address_Set Row:map[external_ids:{GoMap:map[name:openshift-ovn-kubernetes_v4]} name:a3826097561732631257] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996190}]"
I1115 05:39:26.648866 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Address_Set" "uuid"="3ad95ae6-ecbe-46e0-9d4a-38cfda88bac9"
I1115 05:39:26.648896 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Northbound" "table"="Address_Set" "uuid"="3ad95ae6-ecbe-46e0-9d4a-38cfda88bac9" "model"="&{UUID:3ad95ae6-ecbe-46e0-9d4a-38cfda88bac9 Addresses:[] ExternalIDs:map[name:openshift-ovn-kubernetes_v4] Name:a3826097561732631257}"
I1115 05:39:26.648924 63045 address_set.go:308] New(3ad95ae6-ecbe-46e0-9d4a-38cfda88bac9/openshift-ovn-kubernetes_v4/a3826097561732631257) with []
I1115 05:39:26.648968 63045 model_client.go:345] Create operations generated as: [{Op:insert Table:Address_Set Row:map[external_ids:{GoMap:map[name:openshift-infra_v4]} name:a13488436752166948783] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996191}]
I1115 05:39:26.648986 63045 transact.go:41] Configuring OVN: [{Op:insert Table:Address_Set Row:map[external_ids:{GoMap:map[name:openshift-infra_v4]} name:a13488436752166948783] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996191}]
I1115 05:39:26.649010 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:insert Table:Address_Set Row:map[external_ids:{GoMap:map[name:openshift-infra_v4]} name:a13488436752166948783] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996191}]"
I1115 05:39:26.649035 63045 namespace.go:188] [openshift-ovn-kubernetes] adding namespace took 426.94µs
I1115 05:39:26.649046 63045 obj_retry.go:1415] Creating *v1.Namespace openshift-ovn-kubernetes took: 438.808µs
I1115 05:39:26.649217 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Address_Set" "uuid"="1f5c08cb-8f42-4f85-b600-7aca12b71227"
I1115 05:39:26.649252 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Northbound" "table"="Address_Set" "uuid"="1f5c08cb-8f42-4f85-b600-7aca12b71227" "model"="&{UUID:1f5c08cb-8f42-4f85-b600-7aca12b71227 Addresses:[] ExternalIDs:map[name:openshift-infra_v4] Name:a13488436752166948783}"
I1115 05:39:26.649284 63045 address_set.go:308] New(1f5c08cb-8f42-4f85-b600-7aca12b71227/openshift-infra_v4/a13488436752166948783) with []
I1115 05:39:26.649325 63045 model_client.go:345] Create operations generated as: [{Op:insert Table:Address_Set Row:map[external_ids:{GoMap:map[name:openshift-dns_v4]} name:a11840947999323393980] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996192}]
I1115 05:39:26.649340 63045 transact.go:41] Configuring OVN: [{Op:insert Table:Address_Set Row:map[external_ids:{GoMap:map[name:openshift-dns_v4]} name:a11840947999323393980] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996192}]
I1115 05:39:26.649365 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:insert Table:Address_Set Row:map[external_ids:{GoMap:map[name:openshift-dns_v4]} name:a11840947999323393980] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996192}]"
I1115 05:39:26.649390 63045 namespace.go:188] [openshift-infra] adding namespace took 2.317942ms
I1115 05:39:26.649401 63045 obj_retry.go:1415] Creating *v1.Namespace openshift-infra took: 2.335434ms
I1115 05:39:26.649408 63045 obj_retry.go:1380] Add event received for *v1.Namespace, key=openshift-kube-controller-manager I1115 
05:39:26.649412 63045 namespace.go:184] [openshift-kube-controller-manager] adding namespace I1115 05:39:26.649604 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Address_Set" "uuid"="31da521f-e3bb-4921-b342-bda903443133" I1115 05:39:26.649636 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Northbound" "table"="Address_Set" "uuid"="31da521f-e3bb-4921-b342-bda903443133" "model"="&{UUID:31da521f-e3bb-4921-b342-bda903443133 Addresses:[] ExternalIDs:map[name:openshift-dns_v4] Name:a11840947999323393980}" I1115 05:39:26.649660 63045 address_set.go:308] New(31da521f-e3bb-4921-b342-bda903443133/openshift-dns_v4/a11840947999323393980) with [] I1115 05:39:26.649702 63045 model_client.go:345] Create operations generated as: [{Op:insert Table:Address_Set Row:map[external_ids:{GoMap:map[name:openshift-service-ca_v4]} name:a9769903554508400075] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996193}] I1115 05:39:26.649714 63045 transact.go:41] Configuring OVN: [{Op:insert Table:Address_Set Row:map[external_ids:{GoMap:map[name:openshift-service-ca_v4]} name:a9769903554508400075] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996193}] I1115 05:39:26.649738 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:insert Table:Address_Set Row:map[external_ids:{GoMap:map[name:openshift-service-ca_v4]} name:a9769903554508400075] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996193}]" I1115 05:39:26.649760 63045 namespace.go:188] [openshift-dns] adding namespace took 2.671399ms I1115 05:39:26.649767 63045 obj_retry.go:1415] Creating *v1.Namespace openshift-dns took: 2.684509ms I1115 05:39:26.649960 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Address_Set" "uuid"="5a6731e4-5f9c-469d-a8a1-e8af11024a3d" 
I1115 05:39:26.649990 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Northbound" "table"="Address_Set" "uuid"="5a6731e4-5f9c-469d-a8a1-e8af11024a3d" "model"="&{UUID:5a6731e4-5f9c-469d-a8a1-e8af11024a3d Addresses:[] ExternalIDs:map[name:openshift-service-ca_v4] Name:a9769903554508400075}" I1115 05:39:26.650013 63045 address_set.go:308] New(5a6731e4-5f9c-469d-a8a1-e8af11024a3d/openshift-service-ca_v4/a9769903554508400075) with [] I1115 05:39:26.650051 63045 model_client.go:345] Create operations generated as: [{Op:insert Table:Address_Set Row:map[external_ids:{GoMap:map[name:kube-node-lease_v4]} name:a16235039932615691331] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996194}] I1115 05:39:26.650063 63045 transact.go:41] Configuring OVN: [{Op:insert Table:Address_Set Row:map[external_ids:{GoMap:map[name:kube-node-lease_v4]} name:a16235039932615691331] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996194}] I1115 05:39:26.650088 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:insert Table:Address_Set Row:map[external_ids:{GoMap:map[name:kube-node-lease_v4]} name:a16235039932615691331] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996194}]" I1115 05:39:26.650108 63045 namespace.go:188] [openshift-service-ca] adding namespace took 3.003252ms I1115 05:39:26.650120 63045 obj_retry.go:1415] Creating *v1.Namespace openshift-service-ca took: 3.02133ms I1115 05:39:26.650305 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Address_Set" "uuid"="af062544-f5a7-491d-9768-6f6a27047f68" I1115 05:39:26.650334 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Northbound" "table"="Address_Set" "uuid"="af062544-f5a7-491d-9768-6f6a27047f68" "model"="&{UUID:af062544-f5a7-491d-9768-6f6a27047f68 Addresses:[] 
ExternalIDs:map[name:kube-node-lease_v4] Name:a16235039932615691331}" I1115 05:39:26.650360 63045 address_set.go:308] New(af062544-f5a7-491d-9768-6f6a27047f68/kube-node-lease_v4/a16235039932615691331) with [] I1115 05:39:26.650403 63045 model_client.go:345] Create operations generated as: [{Op:insert Table:Address_Set Row:map[external_ids:{GoMap:map[name:kube-system_v4]} name:a6937002112706621489] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996195}] I1115 05:39:26.650420 63045 transact.go:41] Configuring OVN: [{Op:insert Table:Address_Set Row:map[external_ids:{GoMap:map[name:kube-system_v4]} name:a6937002112706621489] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996195}] I1115 05:39:26.650444 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:insert Table:Address_Set Row:map[external_ids:{GoMap:map[name:kube-system_v4]} name:a6937002112706621489] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996195}]" I1115 05:39:26.650470 63045 namespace.go:188] [kube-node-lease] adding namespace took 3.342943ms I1115 05:39:26.650481 63045 obj_retry.go:1415] Creating *v1.Namespace kube-node-lease took: 3.366269ms I1115 05:39:26.650672 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Address_Set" "uuid"="44dd7a65-2546-4dc5-8336-fbcda86a64bd" I1115 05:39:26.650701 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Northbound" "table"="Address_Set" "uuid"="44dd7a65-2546-4dc5-8336-fbcda86a64bd" "model"="&{UUID:44dd7a65-2546-4dc5-8336-fbcda86a64bd Addresses:[] ExternalIDs:map[name:kube-system_v4] Name:a6937002112706621489}" I1115 05:39:26.650722 63045 address_set.go:308] New(44dd7a65-2546-4dc5-8336-fbcda86a64bd/kube-system_v4/a6937002112706621489) with [] I1115 05:39:26.650750 63045 model_client.go:345] Create operations generated as: 
[{Op:insert Table:Address_Set Row:map[external_ids:{GoMap:map[name:openshift-kube-controller-manager_v4]} name:a10309787208889436959] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996196}] I1115 05:39:26.650763 63045 transact.go:41] Configuring OVN: [{Op:insert Table:Address_Set Row:map[external_ids:{GoMap:map[name:openshift-kube-controller-manager_v4]} name:a10309787208889436959] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996196}] I1115 05:39:26.650788 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:insert Table:Address_Set Row:map[external_ids:{GoMap:map[name:openshift-kube-controller-manager_v4]} name:a10309787208889436959] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996196}]" I1115 05:39:26.650813 63045 namespace.go:188] [kube-system] adding namespace took 3.66602ms I1115 05:39:26.650823 63045 obj_retry.go:1415] Creating *v1.Namespace kube-system took: 3.684561ms I1115 05:39:26.650996 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Address_Set" "uuid"="7c677890-25bd-4203-a067-aee4092b73bf" I1115 05:39:26.651023 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Northbound" "table"="Address_Set" "uuid"="7c677890-25bd-4203-a067-aee4092b73bf" "model"="&{UUID:7c677890-25bd-4203-a067-aee4092b73bf Addresses:[] ExternalIDs:map[name:openshift-kube-controller-manager_v4] Name:a10309787208889436959}" I1115 05:39:26.651046 63045 address_set.go:308] New(7c677890-25bd-4203-a067-aee4092b73bf/openshift-kube-controller-manager_v4/a10309787208889436959) with [] I1115 05:39:26.651056 63045 namespace.go:188] [openshift-kube-controller-manager] adding namespace took 1.638843ms I1115 05:39:26.651062 63045 obj_retry.go:1415] Creating *v1.Namespace openshift-kube-controller-manager took: 1.647878ms I1115 05:39:26.651069 63045 
obj_retry.go:1380] Add event received for *v1.Namespace, key=openshift-route-controller-manager I1115 05:39:26.651074 63045 namespace.go:184] [openshift-route-controller-manager] adding namespace I1115 05:39:26.651094 63045 model_client.go:345] Create operations generated as: [{Op:insert Table:Address_Set Row:map[external_ids:{GoMap:map[name:openshift-route-controller-manager_v4]} name:a1030867452693714651] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996197}] I1115 05:39:26.651104 63045 transact.go:41] Configuring OVN: [{Op:insert Table:Address_Set Row:map[external_ids:{GoMap:map[name:openshift-route-controller-manager_v4]} name:a1030867452693714651] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996197}] I1115 05:39:26.651128 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:insert Table:Address_Set Row:map[external_ids:{GoMap:map[name:openshift-route-controller-manager_v4]} name:a1030867452693714651] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996197}]" I1115 05:39:26.651324 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Address_Set" "uuid"="90d7e2ce-665d-4d9a-8ced-60083b00c0ab" I1115 05:39:26.651351 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Northbound" "table"="Address_Set" "uuid"="90d7e2ce-665d-4d9a-8ced-60083b00c0ab" "model"="&{UUID:90d7e2ce-665d-4d9a-8ced-60083b00c0ab Addresses:[] ExternalIDs:map[name:openshift-route-controller-manager_v4] Name:a1030867452693714651}" I1115 05:39:26.651377 63045 address_set.go:308] New(90d7e2ce-665d-4d9a-8ced-60083b00c0ab/openshift-route-controller-manager_v4/a1030867452693714651) with [] I1115 05:39:26.651386 63045 namespace.go:188] [openshift-route-controller-manager] adding namespace took 308.374µs I1115 05:39:26.651393 63045 obj_retry.go:1415] Creating 
*v1.Namespace openshift-route-controller-manager took: 316.347µs I1115 05:39:26.651405 63045 factory.go:546] Added *v1.Namespace event handler 1 I1115 05:39:26.651427 63045 master.go:1221] Node release-ci-ci-op-k5cwk1pv-7cb14 contains subnets: [] W1115 05:39:26.651446 63045 logical_switch_manager.go:590] Failed to get IPs for logical router port rtoj-GR_release-ci-ci-op-k5cwk1pv-7cb14 W1115 05:39:26.651469 63045 master.go:1247] Did not find any logical switches with other-config I1115 05:39:26.651526 63045 obj_retry.go:1380] Add event received for *v1.Node, key=release-ci-ci-op-k5cwk1pv-7cb14 I1115 05:39:26.651543 63045 master.go:1364] Adding or Updating Node "release-ci-ci-op-k5cwk1pv-7cb14" I1115 05:39:26.651551 63045 master.go:838] Failed to get node release-ci-ci-op-k5cwk1pv-7cb14 host subnets annotations: node "release-ci-ci-op-k5cwk1pv-7cb14" has no "k8s.ovn.org/node-subnets" annotation I1115 05:39:26.651563 63045 master.go:862] Expected 1 subnets on node release-ci-ci-op-k5cwk1pv-7cb14, found 0: [] I1115 05:39:26.651574 63045 master.go:904] Allocating subnet 10.42.0.0/24 on node release-ci-ci-op-k5cwk1pv-7cb14 I1115 05:39:26.651582 63045 master.go:938] Allocated Subnets [10.42.0.0/24] on Node release-ci-ci-op-k5cwk1pv-7cb14 I1115 05:39:26.651642 63045 kube.go:97] Setting annotations map[k8s.ovn.org/node-gateway-router-lrp-ifaddr:{"ipv4":"100.64.0.2/16"} k8s.ovn.org/node-subnets:{"default":"10.42.0.0/24"}] on node release-ci-ci-op-k5cwk1pv-7cb14 I1115 05:39:26.656399 63045 model_client.go:345] Create operations generated as: [{Op:insert Table:Logical_Router_Port Row:map[mac:0a:58:0a:2a:00:01 name:rtos-release-ci-ci-op-k5cwk1pv-7cb14 networks:{GoSet:[10.42.0.1/24]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996198}] I1115 05:39:26.656460 63045 model_client.go:370] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert 
Value:{GoSet:[{GoUUID:u2596996198}]}}] Timeout: Where:[where column _uuid == {0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5}] Until: Durable: Comment: Lock: UUIDName:}] I1115 05:39:26.656476 63045 transact.go:41] Configuring OVN: [{Op:insert Table:Logical_Router_Port Row:map[mac:0a:58:0a:2a:00:01 name:rtos-release-ci-ci-op-k5cwk1pv-7cb14 networks:{GoSet:[10.42.0.1/24]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996198} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:u2596996198}]}}] Timeout: Where:[where column _uuid == {0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5}] Until: Durable: Comment: Lock: UUIDName:}] I1115 05:39:26.656539 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:insert Table:Logical_Router_Port Row:map[mac:0a:58:0a:2a:00:01 name:rtos-release-ci-ci-op-k5cwk1pv-7cb14 networks:{GoSet:[10.42.0.1/24]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996198} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:u2596996198}]}}] Timeout: Where:[where column _uuid == {0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5}] Until: Durable: Comment: Lock: UUIDName:}]" I1115 05:39:26.656957 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Logical_Router_Port" "uuid"="5e6913a8-e256-48aa-8cb4-bd6613d3ba1f" I1115 05:39:26.656996 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Northbound" "table"="Logical_Router_Port" "uuid"="5e6913a8-e256-48aa-8cb4-bd6613d3ba1f" "model"="&{UUID:5e6913a8-e256-48aa-8cb4-bd6613d3ba1f Enabled: ExternalIDs:map[] GatewayChassis:[] HaChassisGroup: Ipv6Prefix:[] Ipv6RaConfigs:map[] MAC:0a:58:0a:2a:00:01 Name:rtos-release-ci-ci-op-k5cwk1pv-7cb14 Networks:[10.42.0.1/24] Options:map[] Peer:}" I1115 05:39:26.657013 63045 
cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Logical_Router" "uuid"="0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5" I1115 05:39:26.657052 63045 cache.go:1040] cache "msg"="updated row" "database"="OVN_Northbound" "table"="Logical_Router" "uuid"="0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5" "new"="&{UUID:0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5 Copp: Enabled: ExternalIDs:map[k8s-cluster-router:yes] LoadBalancer:[] LoadBalancerGroup:[] Name:ovn_cluster_router Nat:[] Options:map[mcast_relay:true] Policies:[] Ports:[d132d1a8-41f8-430a-a203-0d05cafce999 5e6913a8-e256-48aa-8cb4-bd6613d3ba1f] StaticRoutes:[]}" "old"="&{UUID:0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5 Copp: Enabled: ExternalIDs:map[k8s-cluster-router:yes] LoadBalancer:[] LoadBalancerGroup:[] Name:ovn_cluster_router Nat:[] Options:map[mcast_relay:true] Policies:[] Ports:[d132d1a8-41f8-430a-a203-0d05cafce999] StaticRoutes:[]}" I1115 05:39:26.657142 63045 model_client.go:345] Create operations generated as: [{Op:insert Table:Logical_Switch Row:map[load_balancer_group:{GoSet:[{GoUUID:7b81a844-05a7-4d75-90db-fc377eeda1a5}]} name:release-ci-ci-op-k5cwk1pv-7cb14 other_config:{GoMap:map[exclude_ips:10.42.0.2 mcast_eth_src:0a:58:0a:2a:00:01 mcast_ip4_src:10.42.0.1 mcast_querier:true mcast_snoop:true subnet:10.42.0.0/24]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996199}] I1115 05:39:26.657168 63045 transact.go:41] Configuring OVN: [{Op:wait Table:Logical_Switch Row:map[] Rows:[map[name:release-ci-ci-op-k5cwk1pv-7cb14]] Columns:[name] Mutations:[] Timeout:0xc0009a3570 Where:[where column name == release-ci-ci-op-k5cwk1pv-7cb14] Until:!= Durable: Comment: Lock: UUIDName:} {Op:insert Table:Logical_Switch Row:map[load_balancer_group:{GoSet:[{GoUUID:7b81a844-05a7-4d75-90db-fc377eeda1a5}]} name:release-ci-ci-op-k5cwk1pv-7cb14 other_config:{GoMap:map[exclude_ips:10.42.0.2 mcast_eth_src:0a:58:0a:2a:00:01 mcast_ip4_src:10.42.0.1 mcast_querier:true 
mcast_snoop:true subnet:10.42.0.0/24]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996199}] I1115 05:39:26.657219 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:wait Table:Logical_Switch Row:map[] Rows:[map[name:release-ci-ci-op-k5cwk1pv-7cb14]] Columns:[name] Mutations:[] Timeout:0xc0009a3570 Where:[where column name == release-ci-ci-op-k5cwk1pv-7cb14] Until:!= Durable: Comment: Lock: UUIDName:} {Op:insert Table:Logical_Switch Row:map[load_balancer_group:{GoSet:[{GoUUID:7b81a844-05a7-4d75-90db-fc377eeda1a5}]} name:release-ci-ci-op-k5cwk1pv-7cb14 other_config:{GoMap:map[exclude_ips:10.42.0.2 mcast_eth_src:0a:58:0a:2a:00:01 mcast_ip4_src:10.42.0.1 mcast_querier:true mcast_snoop:true subnet:10.42.0.0/24]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996199}]" I1115 05:39:26.657555 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Logical_Switch" "uuid"="55ba86b1-407f-4f90-86ba-a2378c8d6ccc" I1115 05:39:26.657596 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Northbound" "table"="Logical_Switch" "uuid"="55ba86b1-407f-4f90-86ba-a2378c8d6ccc" "model"="&{UUID:55ba86b1-407f-4f90-86ba-a2378c8d6ccc ACLs:[] Copp: DNSRecords:[] ExternalIDs:map[] ForwardingGroups:[] LoadBalancer:[] LoadBalancerGroup:[7b81a844-05a7-4d75-90db-fc377eeda1a5] Name:release-ci-ci-op-k5cwk1pv-7cb14 OtherConfig:map[exclude_ips:10.42.0.2 mcast_eth_src:0a:58:0a:2a:00:01 mcast_ip4_src:10.42.0.1 mcast_querier:true mcast_snoop:true subnet:10.42.0.0/24] Ports:[] QOSRules:[]}" I1115 05:39:26.657660 63045 model_client.go:345] Create operations generated as: [{Op:insert Table:Address_Set Row:map[external_ids:{GoMap:map[name:openshift-host-network_v4]} name:a15498572541984179350] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996200}] I1115 05:39:26.657677 
63045 transact.go:41] Configuring OVN: [{Op:insert Table:Address_Set Row:map[external_ids:{GoMap:map[name:openshift-host-network_v4]} name:a15498572541984179350] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996200}] I1115 05:39:26.657704 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:insert Table:Address_Set Row:map[external_ids:{GoMap:map[name:openshift-host-network_v4]} name:a15498572541984179350] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996200}]" I1115 05:39:26.657924 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Address_Set" "uuid"="737f4735-9164-4036-91bb-daa4d869f77c" I1115 05:39:26.657959 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Northbound" "table"="Address_Set" "uuid"="737f4735-9164-4036-91bb-daa4d869f77c" "model"="&{UUID:737f4735-9164-4036-91bb-daa4d869f77c Addresses:[] ExternalIDs:map[name:openshift-host-network_v4] Name:a15498572541984179350}" I1115 05:39:26.657989 63045 address_set.go:308] New(737f4735-9164-4036-91bb-daa4d869f77c/openshift-host-network_v4/a15498572541984179350) with [] W1115 05:39:26.659594 63045 namespace.go:529] Unable to find namespace during ensure in informer cache or kube api server. Will defer configuring namespace. 
I1115 05:39:26.659612 63045 address_set.go:499] (737f4735-9164-4036-91bb-daa4d869f77c/openshift-host-network_v4/a15498572541984179350) adding IPs ([10.42.0.2 100.64.0.2]) to address set I1115 05:39:26.659645 63045 model_client.go:370] Mutate operations generated as: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:insert Value:{GoSet:[10.42.0.2 100.64.0.2]}}] Timeout: Where:[where column _uuid == {737f4735-9164-4036-91bb-daa4d869f77c}] Until: Durable: Comment: Lock: UUIDName:}] I1115 05:39:26.659668 63045 transact.go:41] Configuring OVN: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:insert Value:{GoSet:[10.42.0.2 100.64.0.2]}}] Timeout: Where:[where column _uuid == {737f4735-9164-4036-91bb-daa4d869f77c}] Until: Durable: Comment: Lock: UUIDName:}] I1115 05:39:26.659701 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:insert Value:{GoSet:[10.42.0.2 100.64.0.2]}}] Timeout: Where:[where column _uuid == {737f4735-9164-4036-91bb-daa4d869f77c}] Until: Durable: Comment: Lock: UUIDName:}]" I1115 05:39:26.659961 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Address_Set" "uuid"="737f4735-9164-4036-91bb-daa4d869f77c" I1115 05:39:26.660004 63045 cache.go:1040] cache "msg"="updated row" "database"="OVN_Northbound" "table"="Address_Set" "uuid"="737f4735-9164-4036-91bb-daa4d869f77c" "new"="&{UUID:737f4735-9164-4036-91bb-daa4d869f77c Addresses:[10.42.0.2 100.64.0.2] ExternalIDs:map[name:openshift-host-network_v4] Name:a15498572541984179350}" "old"="&{UUID:737f4735-9164-4036-91bb-daa4d869f77c Addresses:[] ExternalIDs:map[name:openshift-host-network_v4] Name:a15498572541984179350}" I1115 05:39:26.660081 63045 model_client.go:345] Create operations generated as: [{Op:insert Table:Logical_Switch_Port 
Row:map[addresses:{GoSet:[router]} name:stor-release-ci-ci-op-k5cwk1pv-7cb14 options:{GoMap:map[router-port:rtos-release-ci-ci-op-k5cwk1pv-7cb14]} type:router] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996201}] I1115 05:39:26.660126 63045 model_client.go:370] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:u2596996201}]}}] Timeout: Where:[where column _uuid == {55ba86b1-407f-4f90-86ba-a2378c8d6ccc}] Until: Durable: Comment: Lock: UUIDName:}] I1115 05:39:26.660148 63045 transact.go:41] Configuring OVN: [{Op:insert Table:Logical_Switch_Port Row:map[addresses:{GoSet:[router]} name:stor-release-ci-ci-op-k5cwk1pv-7cb14 options:{GoMap:map[router-port:rtos-release-ci-ci-op-k5cwk1pv-7cb14]} type:router] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996201} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:u2596996201}]}}] Timeout: Where:[where column _uuid == {55ba86b1-407f-4f90-86ba-a2378c8d6ccc}] Until: Durable: Comment: Lock: UUIDName:}] I1115 05:39:26.660189 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:insert Table:Logical_Switch_Port Row:map[addresses:{GoSet:[router]} name:stor-release-ci-ci-op-k5cwk1pv-7cb14 options:{GoMap:map[router-port:rtos-release-ci-ci-op-k5cwk1pv-7cb14]} type:router] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996201} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:u2596996201}]}}] Timeout: Where:[where column _uuid == {55ba86b1-407f-4f90-86ba-a2378c8d6ccc}] Until: Durable: Comment: Lock: UUIDName:}]" I1115 05:39:26.660787 63045 cache.go:999] cache "msg"="processing update" 
"database"="OVN_Northbound" "table"="Logical_Switch_Port" "uuid"="16e74789-d810-4e2a-86f4-f17eb9166ace" I1115 05:39:26.660834 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Northbound" "table"="Logical_Switch_Port" "uuid"="16e74789-d810-4e2a-86f4-f17eb9166ace" "model"="&{UUID:16e74789-d810-4e2a-86f4-f17eb9166ace Addresses:[router] Dhcpv4Options: Dhcpv6Options: DynamicAddresses: Enabled: ExternalIDs:map[] HaChassisGroup: Name:stor-release-ci-ci-op-k5cwk1pv-7cb14 Options:map[router-port:rtos-release-ci-ci-op-k5cwk1pv-7cb14] ParentName: PortSecurity:[] Tag: TagRequest: Type:router Up:}" I1115 05:39:26.660851 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Logical_Switch" "uuid"="55ba86b1-407f-4f90-86ba-a2378c8d6ccc" I1115 05:39:26.660901 63045 cache.go:1040] cache "msg"="updated row" "database"="OVN_Northbound" "table"="Logical_Switch" "uuid"="55ba86b1-407f-4f90-86ba-a2378c8d6ccc" "new"="&{UUID:55ba86b1-407f-4f90-86ba-a2378c8d6ccc ACLs:[] Copp: DNSRecords:[] ExternalIDs:map[] ForwardingGroups:[] LoadBalancer:[] LoadBalancerGroup:[7b81a844-05a7-4d75-90db-fc377eeda1a5] Name:release-ci-ci-op-k5cwk1pv-7cb14 OtherConfig:map[exclude_ips:10.42.0.2 mcast_eth_src:0a:58:0a:2a:00:01 mcast_ip4_src:10.42.0.1 mcast_querier:true mcast_snoop:true subnet:10.42.0.0/24] Ports:[16e74789-d810-4e2a-86f4-f17eb9166ace] QOSRules:[]}" "old"="&{UUID:55ba86b1-407f-4f90-86ba-a2378c8d6ccc ACLs:[] Copp: DNSRecords:[] ExternalIDs:map[] ForwardingGroups:[] LoadBalancer:[] LoadBalancerGroup:[7b81a844-05a7-4d75-90db-fc377eeda1a5] Name:release-ci-ci-op-k5cwk1pv-7cb14 OtherConfig:map[exclude_ips:10.42.0.2 mcast_eth_src:0a:58:0a:2a:00:01 mcast_ip4_src:10.42.0.1 mcast_querier:true mcast_snoop:true subnet:10.42.0.0/24] Ports:[] QOSRules:[]}" I1115 05:39:26.660956 63045 model_client.go:370] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert 
Value:{GoSet:[{GoUUID:16e74789-d810-4e2a-86f4-f17eb9166ace}]}}] Timeout: Where:[where column _uuid == {e5a23782-d9b0-4a5b-88e3-9e3f972e43f2}] Until: Durable: Comment: Lock: UUIDName:}] I1115 05:39:26.660976 63045 transact.go:41] Configuring OVN: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:16e74789-d810-4e2a-86f4-f17eb9166ace}]}}] Timeout: Where:[where column _uuid == {e5a23782-d9b0-4a5b-88e3-9e3f972e43f2}] Until: Durable: Comment: Lock: UUIDName:}] I1115 05:39:26.661002 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:16e74789-d810-4e2a-86f4-f17eb9166ace}]}}] Timeout: Where:[where column _uuid == {e5a23782-d9b0-4a5b-88e3-9e3f972e43f2}] Until: Durable: Comment: Lock: UUIDName:}]" I1115 05:39:26.661245 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Port_Group" "uuid"="e5a23782-d9b0-4a5b-88e3-9e3f972e43f2" I1115 05:39:26.661292 63045 cache.go:1040] cache "msg"="updated row" "database"="OVN_Northbound" "table"="Port_Group" "uuid"="e5a23782-d9b0-4a5b-88e3-9e3f972e43f2" "new"="&{UUID:e5a23782-d9b0-4a5b-88e3-9e3f972e43f2 ACLs:[1c8e57af-0f1f-4634-bcb0-537ca919af96 99f0603a-ca15-45fa-9665-4d093f4e20ae] ExternalIDs:map[name:clusterRtrPortGroup] Name:clusterRtrPortGroup Ports:[16e74789-d810-4e2a-86f4-f17eb9166ace]}" "old"="&{UUID:e5a23782-d9b0-4a5b-88e3-9e3f972e43f2 ACLs:[1c8e57af-0f1f-4634-bcb0-537ca919af96 99f0603a-ca15-45fa-9665-4d093f4e20ae] ExternalIDs:map[name:clusterRtrPortGroup] Name:clusterRtrPortGroup Ports:[]}" I1115 05:39:26.661335 63045 obj_retry.go:630] Node add failed for release-ci-ci-op-k5cwk1pv-7cb14, will try again later: [k8s.ovn.org/node-chassis-id annotation not found for node release-ci-ci-op-k5cwk1pv-7cb14, macAddress annotation not found for node 
"release-ci-ci-op-k5cwk1pv-7cb14" , k8s.ovn.org/l3-gateway-config annotation not found for node "release-ci-ci-op-k5cwk1pv-7cb14"] E1115 05:39:26.661357 63045 obj_retry.go:1410] Failed to create *v1.Node release-ci-ci-op-k5cwk1pv-7cb14, error: [k8s.ovn.org/node-chassis-id annotation not found for node release-ci-ci-op-k5cwk1pv-7cb14, macAddress annotation not found for node "release-ci-ci-op-k5cwk1pv-7cb14" , k8s.ovn.org/l3-gateway-config annotation not found for node "release-ci-ci-op-k5cwk1pv-7cb14"] I1115 05:39:26.661374 63045 factory.go:546] Added *v1.Node event handler 2 I1115 05:39:26.661389 63045 ovn.go:824] Starting OVN Service Controller: Using Endpoint Slices I1115 05:39:26.661456 63045 obj_retry.go:1429] Update event received for resource *v1.Node, old object is equal to new: false I1115 05:39:26.661470 63045 obj_retry.go:1472] Update event received for *v1.Node release-ci-ci-op-k5cwk1pv-7cb14 I1115 05:39:26.661503 63045 master.go:1364] Adding or Updating Node "release-ci-ci-op-k5cwk1pv-7cb14" E1115 05:39:26.661519 63045 obj_retry.go:1538] Failed to update *v1.Node, old=release-ci-ci-op-k5cwk1pv-7cb14, new=release-ci-ci-op-k5cwk1pv-7cb14, error: [k8s.ovn.org/node-chassis-id annotation not found for node release-ci-ci-op-k5cwk1pv-7cb14, macAddress annotation not found for node "release-ci-ci-op-k5cwk1pv-7cb14" , k8s.ovn.org/l3-gateway-config annotation not found for node "release-ci-ci-op-k5cwk1pv-7cb14"] I1115 05:39:26.661618 63045 reflector.go:219] Starting reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:134 I1115 05:39:26.661629 63045 reflector.go:255] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:134 I1115 05:39:26.661829 63045 reflector.go:219] Starting reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:134 I1115 05:39:26.661838 63045 reflector.go:255] Listing and watching *v1.EndpointSlice from k8s.io/client-go/informers/factory.go:134 I1115 05:39:26.661982 63045 
reflector.go:219] Starting reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:134
I1115 05:39:26.661993 63045 reflector.go:255] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:134
I1115 05:39:26.662083 63045 services_controller.go:155] Starting controller ovn-lb-controller
I1115 05:39:26.662093 63045 services_controller.go:161] Waiting for informer caches to sync
I1115 05:39:26.662097 63045 shared_informer.go:255] Waiting for caches to sync for ovn-lb-controller
I1115 05:39:26.662115 63045 obj_retry.go:487] Recording add event on pod
I1115 05:39:26.662124 63045 obj_retry.go:1380] Add event received for *v1.Pod, key=openshift-storage/topolvm-controller-8456864f89-vg42d
I1115 05:39:26.662134 63045 obj_retry.go:1415] Creating *v1.Pod openshift-storage/topolvm-controller-8456864f89-vg42d took: 518ns
I1115 05:39:26.662139 63045 obj_retry.go:530] Recording success event on pod
I1115 05:39:26.662151 63045 obj_retry.go:487] Recording add event on pod
I1115 05:39:26.662156 63045 obj_retry.go:1380] Add event received for *v1.Pod, key=openshift-dns/node-resolver-jhcw4
I1115 05:39:26.662163 63045 obj_retry.go:1415] Creating *v1.Pod openshift-dns/node-resolver-jhcw4 took: 823ns
I1115 05:39:26.662166 63045 obj_retry.go:530] Recording success event on pod
I1115 05:39:26.662173 63045 obj_retry.go:487] Recording add event on pod
I1115 05:39:26.662177 63045 obj_retry.go:1380] Add event received for *v1.Pod, key=openshift-ovn-kubernetes/ovnkube-master-kdsb7
I1115 05:39:26.662184 63045 obj_retry.go:1415] Creating *v1.Pod openshift-ovn-kubernetes/ovnkube-master-kdsb7 took: 211ns
I1115 05:39:26.662187 63045 obj_retry.go:530] Recording success event on pod
I1115 05:39:26.662194 63045 obj_retry.go:487] Recording add event on pod
I1115 05:39:26.662198 63045 obj_retry.go:1380] Add event received for *v1.Pod, key=openshift-ingress/router-default-76b7657c68-6xcfc
I1115 05:39:26.662205 63045 obj_retry.go:1415] Creating *v1.Pod openshift-ingress/router-default-76b7657c68-6xcfc took: 82ns
I1115 05:39:26.662208 63045 obj_retry.go:530] Recording success event on pod
I1115 05:39:26.662215 63045 obj_retry.go:487] Recording add event on pod
I1115 05:39:26.662219 63045 obj_retry.go:1380] Add event received for *v1.Pod, key=openshift-service-ca/service-ca-77fc4cc659-dp8dn
I1115 05:39:26.662225 63045 obj_retry.go:1415] Creating *v1.Pod openshift-service-ca/service-ca-77fc4cc659-dp8dn took: 125ns
I1115 05:39:26.662229 63045 obj_retry.go:530] Recording success event on pod
I1115 05:39:26.662232 63045 obj_retry.go:487] Recording add event on pod
I1115 05:39:26.662237 63045 obj_retry.go:1380] Add event received for *v1.Pod, key=openshift-ovn-kubernetes/ovnkube-node-b5wd2
I1115 05:39:26.662243 63045 obj_retry.go:1415] Creating *v1.Pod openshift-ovn-kubernetes/ovnkube-node-b5wd2 took: 227ns
I1115 05:39:26.662246 63045 obj_retry.go:530] Recording success event on pod
I1115 05:39:26.662253 63045 factory.go:546] Added *v1.Pod event handler 3
I1115 05:39:26.662344 63045 factory.go:546] Added *v1.NetworkPolicy event handler 4
I1115 05:39:26.662358 63045 ovn.go:446] Completing all the Watchers took 15.64237ms
I1115 05:39:26.662375 63045 topology_version.go:118] No version string found. The OVN topology is before versioning is introduced.
Upgrade needed
I1115 05:39:26.662445 63045 model_client.go:354] Update operations generated as: [{Op:update Table:Logical_Router Row:map[external_ids:{GoMap:map[k8s-cluster-router:yes k8s-ovn-topo-version:5]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5}] Until: Durable: Comment: Lock: UUIDName:}]
I1115 05:39:26.662465 63045 transact.go:41] Configuring OVN: [{Op:update Table:Logical_Router Row:map[external_ids:{GoMap:map[k8s-cluster-router:yes k8s-ovn-topo-version:5]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5}] Until: Durable: Comment: Lock: UUIDName:}]
I1115 05:39:26.662523 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:update Table:Logical_Router Row:map[external_ids:{GoMap:map[k8s-cluster-router:yes k8s-ovn-topo-version:5]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5}] Until: Durable: Comment: Lock: UUIDName:}]"
I1115 05:39:26.662568 63045 egress_services_controller.go:153] Starting Egress Services Controller
I1115 05:39:26.662573 63045 shared_informer.go:255] Waiting for caches to sync for egressservices
I1115 05:39:26.662861 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Logical_Router" "uuid"="0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5"
I1115 05:39:26.662912 63045 cache.go:1040] cache "msg"="updated row" "database"="OVN_Northbound" "table"="Logical_Router" "uuid"="0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5" "new"="&{UUID:0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5 Copp: Enabled: ExternalIDs:map[k8s-cluster-router:yes k8s-ovn-topo-version:5] LoadBalancer:[] LoadBalancerGroup:[] Name:ovn_cluster_router Nat:[] Options:map[mcast_relay:true] Policies:[] Ports:[d132d1a8-41f8-430a-a203-0d05cafce999 5e6913a8-e256-48aa-8cb4-bd6613d3ba1f] StaticRoutes:[]}" "old"="&{UUID:0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5 Copp: Enabled: ExternalIDs:map[k8s-cluster-router:yes] LoadBalancer:[] LoadBalancerGroup:[] Name:ovn_cluster_router Nat:[] Options:map[mcast_relay:true] Policies:[] Ports:[d132d1a8-41f8-430a-a203-0d05cafce999 5e6913a8-e256-48aa-8cb4-bd6613d3ba1f] StaticRoutes:[]}"
I1115 05:39:26.662936 63045 topology_version.go:49] Updated Logical_Router ovn_cluster_router topology version to 5
I1115 05:39:26.663664 63045 egress_services_endpointslice.go:81] Ignoring updating default/kubernetes for endpointslice default/kubernetes as it is not a known egress service
I1115 05:39:26.663689 63045 services_controller.go:364] Adding service default/kubernetes
I1115 05:39:26.663698 63045 services_controller.go:364] Adding service openshift-ingress/router-internal-default
I1115 05:39:26.663703 63045 services_controller.go:364] Adding service openshift-dns/dns-default
I1115 05:39:26.663716 63045 egress_services_endpointslice.go:81] Ignoring updating openshift-ingress/router-internal-default for endpointslice openshift-ingress/router-internal-default-vjpkx as it is not a known egress service
I1115 05:39:26.663720 63045 egress_services_endpointslice.go:81] Ignoring updating openshift-dns/dns-default for endpointslice openshift-dns/dns-default-jxtwl as it is not a known egress service
I1115 05:39:26.663879 63045 node_tracker.go:174] Processing possible switch / router updates for node release-ci-ci-op-k5cwk1pv-7cb14
I1115 05:39:26.663918 63045 node_tracker.go:191] Node release-ci-ci-op-k5cwk1pv-7cb14 has invalid / no gateway config: k8s.ovn.org/l3-gateway-config annotation not found for node "release-ci-ci-op-k5cwk1pv-7cb14"
I1115 05:39:26.663929 63045 node_tracker.go:148] Node release-ci-ci-op-k5cwk1pv-7cb14 switch + router changed, syncing services
I1115 05:39:26.663933 63045 services_controller.go:344] Full service sync requested
I1115 05:39:26.663939 63045 services_controller.go:364] Adding service default/kubernetes
I1115 05:39:26.663943
63045 services_controller.go:364] Adding service openshift-ingress/router-internal-default
I1115 05:39:26.663947 63045 services_controller.go:364] Adding service openshift-dns/dns-default
I1115 05:39:26.667145 63045 topology_version.go:62] Updated ConfigMap openshift-ovn-kubernetes/control-plane-status topology version to 5
I1115 05:39:26.682308 63045 shared_informer.go:285] caches populated
I1115 05:39:26.682330 63045 shared_informer.go:285] caches populated
I1115 05:39:26.682335 63045 shared_informer.go:285] caches populated
I1115 05:39:26.682341 63045 shared_informer.go:285] caches populated
I1115 05:39:26.682345 63045 shared_informer.go:285] caches populated
I1115 05:39:26.682350 63045 shared_informer.go:285] caches populated
I1115 05:39:26.684377 63045 config.go:1304] Exec: /usr/bin/ovs-vsctl --timeout=15 set Open_vSwitch . external_ids:ovn-remote="unix:/var/run/ovn/ovnsb_db.sock"
I1115 05:39:26.696389 63045 ovs.go:200] Exec(4): /usr/bin/ovs-vsctl --timeout=15 set Open_vSwitch . external_ids:ovn-encap-type=geneve external_ids:ovn-encap-ip=10.0.0.2 external_ids:ovn-remote-probe-interval=180000 external_ids:ovn-openflow-probe-interval=180 external_ids:hostname="release-ci-ci-op-k5cwk1pv-7cb14" external_ids:ovn-monitor-all=true external_ids:ovn-ofctrl-wait-before-clear=0 external_ids:ovn-enable-lflow-cache=false external_ids:ovn-memlimit-lflow-cache-kb=870
I1115 05:39:26.710311 63045 ovs.go:203] Exec(4): stdout: ""
I1115 05:39:26.710330 63045 ovs.go:204] Exec(4): stderr: ""
I1115 05:39:26.710348 63045 ovs.go:200] Exec(5): /usr/bin/ovs-vsctl --timeout=15 -- clear bridge br-int netflow -- clear bridge br-int sflow -- clear bridge br-int ipfix
I1115 05:39:26.719618 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Southbound" "table"="Chassis_Private" "uuid"="cf46ace0-5659-43fc-8b6b-ef639e9df9d4"
I1115 05:39:26.719669 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Southbound" "table"="Chassis_Private" "uuid"="cf46ace0-5659-43fc-8b6b-ef639e9df9d4" "model"="&{UUID:cf46ace0-5659-43fc-8b6b-ef639e9df9d4 Chassis: ExternalIDs:map[] Name:77436c83-1258-484f-b8d8-ec91acb3c8f3 NbCfg:0 NbCfgTimestamp:0}"
I1115 05:39:26.719690 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Southbound" "table"="Chassis" "uuid"="8ca710ab-2271-4c58-a733-b2e2cdca4660"
I1115 05:39:26.719743 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Southbound" "table"="Chassis" "uuid"="8ca710ab-2271-4c58-a733-b2e2cdca4660" "model"="&{UUID:8ca710ab-2271-4c58-a733-b2e2cdca4660 Encaps:[48522019-e3fa-4218-a4ff-d7c10a3f5dea] ExternalIDs:map[ct-no-masked-label:true datapath-type:system iface-types:bareudp,erspan,geneve,gre,gtpu,internal,ip6erspan,ip6gre,lisp,patch,stt,system,tap,vxlan is-interconn:false ovn-bridge-mappings: ovn-chassis-mac-mappings: ovn-cms-options: ovn-enable-lflow-cache:false ovn-limit-lflow-cache: ovn-memlimit-lflow-cache-kb:870 ovn-monitor-all:true ovn-trim-limit-lflow-cache: ovn-trim-timeout-ms: ovn-trim-wmark-perc-lflow-cache: port-up-notif:true] Hostname:release-ci-ci-op-k5cwk1pv-7cb14 Name:77436c83-1258-484f-b8d8-ec91acb3c8f3 NbCfg:0 OtherConfig:map[ct-no-masked-label:true datapath-type:system iface-types:bareudp,erspan,geneve,gre,gtpu,internal,ip6erspan,ip6gre,lisp,patch,stt,system,tap,vxlan is-interconn:false ovn-bridge-mappings: ovn-chassis-mac-mappings: ovn-cms-options: ovn-enable-lflow-cache:false ovn-limit-lflow-cache: ovn-memlimit-lflow-cache-kb:870 ovn-monitor-all:true ovn-trim-limit-lflow-cache: ovn-trim-timeout-ms: ovn-trim-wmark-perc-lflow-cache: port-up-notif:true] TransportZones:[] VtepLogicalSwitches:[]}"
I1115 05:39:26.723066 63045 ovs.go:203] Exec(5): stdout: ""
I1115 05:39:26.723085 63045 ovs.go:204] Exec(5): stderr: ""
I1115 05:39:26.725623 63045 node.go:384] Node release-ci-ci-op-k5cwk1pv-7cb14 ready for ovn initialization with subnet 10.42.0.0/24
I1115 05:39:26.725648 63045 ovs.go:200] Exec(6): /usr/bin/ovn-sbctl --timeout=15 --no-leader-only --columns=up list Port_Binding
I1115 05:39:26.734773 63045 ovs.go:203] Exec(6): stdout: ""
I1115 05:39:26.734948 63045 ovs.go:204] Exec(6): stderr: ""
I1115 05:39:26.734964 63045 node.go:313] Detected support for port binding with external IDs
I1115 05:39:26.735036 63045 ovs.go:200] Exec(7): /usr/bin/ovs-vsctl --timeout=15 -- --if-exists del-port br-int k8s-release-ci- -- --may-exist add-port br-int ovn-k8s-mp0 -- set interface ovn-k8s-mp0 type=internal mtu_request=1400 external-ids:iface-id=k8s-release-ci-ci-op-k5cwk1pv-7cb14
I1115 05:39:26.759461 63045 ovs.go:203] Exec(7): stdout: ""
I1115 05:39:26.759479 63045 ovs.go:204] Exec(7): stderr: ""
I1115 05:39:26.759511 63045 ovs.go:200] Exec(8): /usr/bin/ovs-vsctl --timeout=15 --if-exists get interface ovn-k8s-mp0 mac_in_use
I1115 05:39:26.767359 63045 shared_informer.go:285] caches populated
I1115 05:39:26.767375 63045 shared_informer.go:262] Caches are synced for egressservices
I1115 05:39:26.767381 63045 shared_informer.go:255] Waiting for caches to sync for egressserviceendpointslices
I1115 05:39:26.767389 63045 shared_informer.go:285] caches populated
I1115 05:39:26.767393 63045 shared_informer.go:262] Caches are synced for egressserviceendpointslices
I1115 05:39:26.767396 63045 shared_informer.go:255] Waiting for caches to sync for egressservicenodes
I1115 05:39:26.767407 63045 shared_informer.go:285] caches populated
I1115 05:39:26.767412 63045 shared_informer.go:262] Caches are synced for egressservicenodes
I1115 05:39:26.767417 63045 egress_services_controller.go:173] Repairing Egress Services
I1115 05:39:26.767484 63045 egress_services_node.go:367] Setting labels map[] on node release-ci-ci-op-k5cwk1pv-7cb14
I1115 05:39:26.767688 63045 shared_informer.go:285] caches populated
I1115 05:39:26.767693 63045 shared_informer.go:262] Caches are synced for ovn-lb-controller
I1115 05:39:26.767698 63045 repair.go:65] Starting repairing loop for services
I1115 05:39:26.767763 63045 repair.go:117]
Deleted 0 stale service LBs
I1115 05:39:26.767781 63045 repair.go:67] Finished repairing loop for services: 77.858µs
I1115 05:39:26.767787 63045 services_controller.go:173] Starting workers
I1115 05:39:26.767816 63045 services_controller.go:241] Processing sync for service default/kubernetes
I1115 05:39:26.767823 63045 services_controller.go:280] Service kubernetes retrieved from lister: &Service{ObjectMeta:{kubernetes default 75b8e8a4-42f7-4abc-b0ba-5afada275bf7 198 0 2022-11-15 05:38:37 +0000 UTC map[component:apiserver provider:kubernetes] map[] [] [] [{microshift Update v1 2022-11-15 05:38:37 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:component":{},"f:provider":{}}},"f:spec":{"f:clusterIP":{},"f:internalTrafficPolicy":{},"f:ipFamilyPolicy":{},"f:ports":{".":{},"k:{\"port\":443,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 6443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.43.0.1,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.43.0.1],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
I1115 05:39:26.767961 63045 kube.go:303] Getting endpoints for slice default/kubernetes
I1115 05:39:26.767969 63045 kube.go:330] Adding slice kubernetes endpoints: [10.0.0.2], port: 6443
I1115 05:39:26.767975 63045 kube.go:346] LB Endpoints for default/kubernetes are: [10.0.0.2] / [] on port: 6443
I1115 05:39:26.767984 63045 services_controller.go:296] Built service default/kubernetes LB cluster-wide configs []services.lbConfig(nil)
I1115 05:39:26.767992 63045 services_controller.go:297] Built service default/kubernetes LB per-node configs []services.lbConfig{services.lbConfig{vips:[]string{"10.43.0.1"}, protocol:"TCP", inport:443, eps:util.LbEndpoints{V4IPs:[]string{"10.0.0.2"}, V6IPs:[]string{}, Port:6443}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}
I1115 05:39:26.768017 63045 services_controller.go:303] Built service default/kubernetes cluster-wide LB []loadbalancer.LB{}
I1115 05:39:26.768024 63045 services_controller.go:304] Built service default/kubernetes per-node LB []loadbalancer.LB{loadbalancer.LB{Name:"Service_default/kubernetes_TCP_node_switch_release-ci-ci-op-k5cwk1pv-7cb14", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"default/kubernetes"}, Opts:loadbalancer.LBOpts{Unidling:false, Affinity:false, SkipSNAT:false}, Rules:[]loadbalancer.LBRule{loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.1", Port:443}, Targets:[]loadbalancer.Addr{loadbalancer.Addr{IP:"10.0.0.2", Port:6443}}}}, Switches:[]string{"release-ci-ci-op-k5cwk1pv-7cb14"}, Routers:[]string(nil), Groups:[]string(nil)}}
I1115 05:39:26.768048 63045 services_controller.go:305] Service default/kubernetes has 0 cluster-wide and 1 per-node configs, making 0 and 1 load balancers
I1115 05:39:26.768057 63045 services_controller.go:316] Services do not match, existing lbs: []loadbalancer.LB(nil), built lbs: []loadbalancer.LB{loadbalancer.LB{Name:"Service_default/kubernetes_TCP_node_switch_release-ci-ci-op-k5cwk1pv-7cb14", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"default/kubernetes"}, Opts:loadbalancer.LBOpts{Unidling:false, Affinity:false, SkipSNAT:false}, Rules:[]loadbalancer.LBRule{loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.1", Port:443}, Targets:[]loadbalancer.Addr{loadbalancer.Addr{IP:"10.0.0.2", Port:6443}}}}, Switches:[]string{"release-ci-ci-op-k5cwk1pv-7cb14"}, Routers:[]string(nil), Groups:[]string(nil)}}
I1115 05:39:26.768124 63045 model_client.go:345] Create operations generated as: [{Op:insert Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:default/kubernetes]} name:Service_default/kubernetes_TCP_node_switch_release-ci-ci-op-k5cwk1pv-7cb14 options:{GoMap:map[event:false reject:true skip_snat:false]} protocol:{GoSet:[tcp]} vips:{GoMap:map[10.43.0.1:443:10.0.0.2:6443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996202}]
I1115 05:39:26.768174 63045 model_client.go:370] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:load_balancer Mutator:insert Value:{GoSet:[{GoUUID:u2596996202}]}}] Timeout: Where:[where column _uuid == {55ba86b1-407f-4f90-86ba-a2378c8d6ccc}] Until: Durable: Comment: Lock: UUIDName:}]
I1115 05:39:26.768193 63045 transact.go:41] Configuring OVN: [{Op:wait Table:Load_Balancer Row:map[] Rows:[map[name:Service_default/kubernetes_TCP_node_switch_release-ci-ci-op-k5cwk1pv-7cb14]] Columns:[name] Mutations:[] Timeout:0xc000a88bc0 Where:[where column name == Service_default/kubernetes_TCP_node_switch_release-ci-ci-op-k5cwk1pv-7cb14] Until:!= Durable: Comment: Lock: UUIDName:} {Op:insert Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:default/kubernetes]} name:Service_default/kubernetes_TCP_node_switch_release-ci-ci-op-k5cwk1pv-7cb14 options:{GoMap:map[event:false reject:true skip_snat:false]} protocol:{GoSet:[tcp]} vips:{GoMap:map[10.43.0.1:443:10.0.0.2:6443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996202} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:load_balancer Mutator:insert Value:{GoSet:[{GoUUID:u2596996202}]}}] Timeout: Where:[where column _uuid == {55ba86b1-407f-4f90-86ba-a2378c8d6ccc}] Until: Durable: Comment: Lock: UUIDName:}]
I1115 05:39:26.768306 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:wait Table:Load_Balancer Row:map[] Rows:[map[name:Service_default/kubernetes_TCP_node_switch_release-ci-ci-op-k5cwk1pv-7cb14]] Columns:[name] Mutations:[] Timeout:0xc000a88bc0 Where:[where column name == Service_default/kubernetes_TCP_node_switch_release-ci-ci-op-k5cwk1pv-7cb14] Until:!= Durable: Comment: Lock: UUIDName:} {Op:insert Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:default/kubernetes]} name:Service_default/kubernetes_TCP_node_switch_release-ci-ci-op-k5cwk1pv-7cb14 options:{GoMap:map[event:false reject:true skip_snat:false]} protocol:{GoSet:[tcp]} vips:{GoMap:map[10.43.0.1:443:10.0.0.2:6443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996202} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:load_balancer Mutator:insert Value:{GoSet:[{GoUUID:u2596996202}]}}] Timeout: Where:[where column _uuid == {55ba86b1-407f-4f90-86ba-a2378c8d6ccc}] Until: Durable: Comment: Lock: UUIDName:}]"
I1115 05:39:26.768396 63045 services_controller.go:241] Processing sync for service openshift-ingress/router-internal-default
I1115 05:39:26.768403 63045 services_controller.go:280] Service router-internal-default retrieved from lister: &Service{ObjectMeta:{router-internal-default openshift-ingress afda66bc-4681-4006-9855-027e494837c6 322 0 2022-11-15 05:39:02 +0000 UTC map[ingresscontroller.operator.openshift.io/owning-ingresscontroller:default] map[operator.openshift.io/spec-hash:94d40d813f37d0a9b7725c3a5d9733e785139a9335a8a38f01758b6b244ab402] [] [] [{microshift Update v1 2022-11-15 05:39:02 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{".":{},"f:operator.openshift.io/spec-hash":{}},"f:labels":{".":{},"f:ingresscontroller.operator.openshift.io/owning-ingresscontroller":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}},"k:{\"port\":443,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}},"k:{\"port\":1936,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{1 0 http},NodePort:0,AppProtocol:nil,},ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{1 0 https},NodePort:0,AppProtocol:nil,},ServicePort{Name:metrics,Protocol:TCP,Port:1936,TargetPort:{0 1936 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default,},ClusterIP:10.43.73.144,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.43.73.144],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
I1115 05:39:26.768480 63045 kube.go:303] Getting endpoints for slice openshift-ingress/router-internal-default-vjpkx
I1115 05:39:26.768486 63045 kube.go:346] LB Endpoints for openshift-ingress/router-internal-default are: [] / [] on port: 0
I1115 05:39:26.768507 63045 kube.go:303] Getting endpoints for slice openshift-ingress/router-internal-default-vjpkx
I1115 05:39:26.768515 63045 kube.go:346] LB Endpoints for openshift-ingress/router-internal-default are: [] / [] on port: 0
I1115 05:39:26.768521 63045 kube.go:303] Getting endpoints for slice openshift-ingress/router-internal-default-vjpkx
I1115 05:39:26.768528 63045 kube.go:346] LB Endpoints for openshift-ingress/router-internal-default are: [] / [] on port: 0
I1115 05:39:26.768536 63045 services_controller.go:296] Built service openshift-ingress/router-internal-default LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.43.73.144"}, protocol:"TCP", inport:80, eps:util.LbEndpoints{V4IPs:[]string{}, V6IPs:[]string{}, Port:0}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{"10.43.73.144"}, protocol:"TCP", inport:443, eps:util.LbEndpoints{V4IPs:[]string{}, V6IPs:[]string{}, Port:0}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{"10.43.73.144"}, protocol:"TCP", inport:1936, eps:util.LbEndpoints{V4IPs:[]string{}, V6IPs:[]string{}, Port:0}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}
I1115 05:39:26.768551 63045 services_controller.go:297] Built service openshift-ingress/router-internal-default LB per-node configs []services.lbConfig(nil)
I1115 05:39:26.768567 63045 services_controller.go:303] Built service openshift-ingress/router-internal-default cluster-wide LB []loadbalancer.LB{loadbalancer.LB{Name:"Service_openshift-ingress/router-internal-default_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-ingress/router-internal-default"}, Opts:loadbalancer.LBOpts{Unidling:false, Affinity:false, SkipSNAT:false}, Rules:[]loadbalancer.LBRule{loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.73.144", Port:80}, Targets:[]loadbalancer.Addr{}}, loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.73.144", Port:443}, Targets:[]loadbalancer.Addr{}}, loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.73.144", Port:1936}, Targets:[]loadbalancer.Addr{}}}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}
I1115 05:39:26.768585 63045 services_controller.go:304] Built service openshift-ingress/router-internal-default per-node LB []loadbalancer.LB{}
I1115 05:39:26.768591 63045 services_controller.go:305] Service openshift-ingress/router-internal-default has 3 cluster-wide and 0 per-node configs, making 1 and 0 load balancers
I1115 05:39:26.768599 63045 services_controller.go:316] Services do not match, existing lbs: []loadbalancer.LB(nil), built lbs: []loadbalancer.LB{loadbalancer.LB{Name:"Service_openshift-ingress/router-internal-default_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-ingress/router-internal-default"}, Opts:loadbalancer.LBOpts{Unidling:false, Affinity:false, SkipSNAT:false}, Rules:[]loadbalancer.LBRule{loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.73.144", Port:80}, Targets:[]loadbalancer.Addr{}}, loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.73.144", Port:443}, Targets:[]loadbalancer.Addr{}}, loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.73.144", Port:1936}, Targets:[]loadbalancer.Addr{}}}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}
I1115 05:39:26.768654 63045 model_client.go:345] Create operations generated as: [{Op:insert Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress/router-internal-default]} name:Service_openshift-ingress/router-internal-default_TCP_cluster options:{GoMap:map[event:false reject:true skip_snat:false]} protocol:{GoSet:[tcp]} vips:{GoMap:map[10.43.73.144:1936: 10.43.73.144:443: 10.43.73.144:80:]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996203}]
I1115 05:39:26.768695 63045
model_client.go:370] Mutate operations generated as: [{Op:mutate Table:Load_Balancer_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:load_balancer Mutator:insert Value:{GoSet:[{GoUUID:u2596996203}]}}] Timeout: Where:[where column _uuid == {7b81a844-05a7-4d75-90db-fc377eeda1a5}] Until: Durable: Comment: Lock: UUIDName:}]
I1115 05:39:26.768707 63045 transact.go:41] Configuring OVN: [{Op:wait Table:Load_Balancer Row:map[] Rows:[map[name:Service_openshift-ingress/router-internal-default_TCP_cluster]] Columns:[name] Mutations:[] Timeout:0xc000a898f8 Where:[where column name == Service_openshift-ingress/router-internal-default_TCP_cluster] Until:!= Durable: Comment: Lock: UUIDName:} {Op:insert Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress/router-internal-default]} name:Service_openshift-ingress/router-internal-default_TCP_cluster options:{GoMap:map[event:false reject:true skip_snat:false]} protocol:{GoSet:[tcp]} vips:{GoMap:map[10.43.73.144:1936: 10.43.73.144:443: 10.43.73.144:80:]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996203} {Op:mutate Table:Load_Balancer_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:load_balancer Mutator:insert Value:{GoSet:[{GoUUID:u2596996203}]}}] Timeout: Where:[where column _uuid == {7b81a844-05a7-4d75-90db-fc377eeda1a5}] Until: Durable: Comment: Lock: UUIDName:}]
I1115 05:39:26.768793 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:wait Table:Load_Balancer Row:map[] Rows:[map[name:Service_openshift-ingress/router-internal-default_TCP_cluster]] Columns:[name] Mutations:[] Timeout:0xc000a898f8 Where:[where column name == Service_openshift-ingress/router-internal-default_TCP_cluster] Until:!= Durable: Comment: Lock: UUIDName:} {Op:insert Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress/router-internal-default]} name:Service_openshift-ingress/router-internal-default_TCP_cluster options:{GoMap:map[event:false reject:true skip_snat:false]} protocol:{GoSet:[tcp]} vips:{GoMap:map[10.43.73.144:1936: 10.43.73.144:443: 10.43.73.144:80:]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996203} {Op:mutate Table:Load_Balancer_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:load_balancer Mutator:insert Value:{GoSet:[{GoUUID:u2596996203}]}}] Timeout: Where:[where column _uuid == {7b81a844-05a7-4d75-90db-fc377eeda1a5}] Until: Durable: Comment: Lock: UUIDName:}]"
I1115 05:39:26.768855 63045 services_controller.go:241] Processing sync for service openshift-dns/dns-default
I1115 05:39:26.768861 63045 services_controller.go:280] Service dns-default retrieved from lister: &Service{ObjectMeta:{dns-default openshift-dns b3f0582a-7984-43ab-a7fb-59fcf4f905df 327 0 2022-11-15 05:39:02 +0000 UTC map[] map[operator.openshift.io/spec-hash:c387daddabfc2dde4f8d3747fd4d4cc94e257885202c60f32bade610838704c3 service.beta.openshift.io/serving-cert-secret-name:dns-default-metrics-tls] [] [] [{microshift Update v1 2022-11-15 05:39:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:operator.openshift.io/spec-hash":{},"f:service.beta.openshift.io/serving-cert-secret-name":{}}},"f:spec":{"f:clusterIP":{},"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":53,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}},"k:{\"port\":53,\"protocol\":\"UDP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}},"k:{\"port\":9154,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:dns,Protocol:UDP,Port:53,TargetPort:{1 0 dns},NodePort:0,AppProtocol:nil,},ServicePort{Name:dns-tcp,Protocol:TCP,Port:53,TargetPort:{1 0 dns-tcp},NodePort:0,AppProtocol:nil,},ServicePort{Name:metrics,Protocol:TCP,Port:9154,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{dns.operator.openshift.io/daemonset-dns: default,},ClusterIP:10.43.0.10,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.43.0.10],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
I1115 05:39:26.768926 63045 kube.go:303] Getting endpoints for slice openshift-dns/dns-default-jxtwl
I1115 05:39:26.768931 63045 kube.go:346] LB Endpoints for openshift-dns/dns-default are: [] / [] on port: 0
I1115 05:39:26.768936 63045 kube.go:303] Getting endpoints for slice openshift-dns/dns-default-jxtwl
I1115 05:39:26.768940 63045 kube.go:346] LB Endpoints for openshift-dns/dns-default are: [] / [] on port: 0
I1115 05:39:26.768943 63045 kube.go:303] Getting endpoints for slice openshift-dns/dns-default-jxtwl
I1115 05:39:26.768948 63045 kube.go:346] LB Endpoints for openshift-dns/dns-default are: [] / [] on port: 0
I1115 05:39:26.768954 63045 services_controller.go:296] Built service openshift-dns/dns-default LB cluster-wide configs []services.lbConfig(nil)
I1115 05:39:26.768959 63045 services_controller.go:297] Built service openshift-dns/dns-default LB per-node configs []services.lbConfig{services.lbConfig{vips:[]string{"10.43.0.10"}, protocol:"UDP", inport:53, eps:util.LbEndpoints{V4IPs:[]string{}, V6IPs:[]string{}, Port:0}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{"10.43.0.10"}, protocol:"TCP", inport:53, eps:util.LbEndpoints{V4IPs:[]string{}, V6IPs:[]string{}, Port:0}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{"10.43.0.10"}, protocol:"TCP", inport:9154, eps:util.LbEndpoints{V4IPs:[]string{}, V6IPs:[]string{}, Port:0}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}
I1115 05:39:26.768983 63045 services_controller.go:303] Built service openshift-dns/dns-default cluster-wide LB []loadbalancer.LB{}
I1115 05:39:26.768989 63045 services_controller.go:304] Built service openshift-dns/dns-default per-node LB []loadbalancer.LB{loadbalancer.LB{Name:"Service_openshift-dns/dns-default_TCP_node_switch_release-ci-ci-op-k5cwk1pv-7cb14", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-dns/dns-default"}, Opts:loadbalancer.LBOpts{Unidling:false, Affinity:false, SkipSNAT:false}, Rules:[]loadbalancer.LBRule{loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.10", Port:53}, Targets:[]loadbalancer.Addr{}}, loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.10", Port:9154}, Targets:[]loadbalancer.Addr{}}}, Switches:[]string{"release-ci-ci-op-k5cwk1pv-7cb14"}, Routers:[]string(nil), Groups:[]string(nil)}, loadbalancer.LB{Name:"Service_openshift-dns/dns-default_UDP_node_switch_release-ci-ci-op-k5cwk1pv-7cb14", UUID:"", Protocol:"UDP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-dns/dns-default"}, Opts:loadbalancer.LBOpts{Unidling:false, Affinity:false, SkipSNAT:false}, Rules:[]loadbalancer.LBRule{loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.10", Port:53}, Targets:[]loadbalancer.Addr{}}}, Switches:[]string{"release-ci-ci-op-k5cwk1pv-7cb14"}, Routers:[]string(nil), Groups:[]string(nil)}}
I1115 05:39:26.769014 63045 services_controller.go:305] Service openshift-dns/dns-default has 0 cluster-wide and 3 per-node configs, making 0 and 2 load
balancers I1115 05:39:26.769027 63045 services_controller.go:316] Services do not match, existing lbs: []loadbalancer.LB(nil), built lbs: []loadbalancer.LB{loadbalancer.LB{Name:"Service_openshift-dns/dns-default_TCP_node_switch_release-ci-ci-op-k5cwk1pv-7cb14", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-dns/dns-default"}, Opts:loadbalancer.LBOpts{Unidling:false, Affinity:false, SkipSNAT:false}, Rules:[]loadbalancer.LBRule{loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.10", Port:53}, Targets:[]loadbalancer.Addr{}}, loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.10", Port:9154}, Targets:[]loadbalancer.Addr{}}}, Switches:[]string{"release-ci-ci-op-k5cwk1pv-7cb14"}, Routers:[]string(nil), Groups:[]string(nil)}, loadbalancer.LB{Name:"Service_openshift-dns/dns-default_UDP_node_switch_release-ci-ci-op-k5cwk1pv-7cb14", UUID:"", Protocol:"UDP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-dns/dns-default"}, Opts:loadbalancer.LBOpts{Unidling:false, Affinity:false, SkipSNAT:false}, Rules:[]loadbalancer.LBRule{loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.10", Port:53}, Targets:[]loadbalancer.Addr{}}}, Switches:[]string{"release-ci-ci-op-k5cwk1pv-7cb14"}, Routers:[]string(nil), Groups:[]string(nil)}} I1115 05:39:26.769084 63045 model_client.go:345] Create operations generated as: [{Op:insert Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_switch_release-ci-ci-op-k5cwk1pv-7cb14 options:{GoMap:map[event:false reject:true skip_snat:false]} protocol:{GoSet:[tcp]} vips:{GoMap:map[10.43.0.10:53: 10.43.0.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996204}] I1115 05:39:26.769111 63045 model_client.go:345] Create operations generated as: [{Op:insert 
Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_UDP_node_switch_release-ci-ci-op-k5cwk1pv-7cb14 options:{GoMap:map[event:false reject:true skip_snat:false]} protocol:{GoSet:[udp]} vips:{GoMap:map[10.43.0.10:53:]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996205}] I1115 05:39:26.769146 63045 model_client.go:370] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:load_balancer Mutator:insert Value:{GoSet:[{GoUUID:u2596996204} {GoUUID:u2596996205}]}}] Timeout: Where:[where column _uuid == {55ba86b1-407f-4f90-86ba-a2378c8d6ccc}] Until: Durable: Comment: Lock: UUIDName:}] I1115 05:39:26.769161 63045 transact.go:41] Configuring OVN: [{Op:wait Table:Load_Balancer Row:map[] Rows:[map[name:Service_openshift-dns/dns-default_TCP_node_switch_release-ci-ci-op-k5cwk1pv-7cb14]] Columns:[name] Mutations:[] Timeout:0xc00016e790 Where:[where column name == Service_openshift-dns/dns-default_TCP_node_switch_release-ci-ci-op-k5cwk1pv-7cb14] Until:!= Durable: Comment: Lock: UUIDName:} {Op:insert Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_switch_release-ci-ci-op-k5cwk1pv-7cb14 options:{GoMap:map[event:false reject:true skip_snat:false]} protocol:{GoSet:[tcp]} vips:{GoMap:map[10.43.0.10:53: 10.43.0.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996204} {Op:insert Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_UDP_node_switch_release-ci-ci-op-k5cwk1pv-7cb14 options:{GoMap:map[event:false reject:true skip_snat:false]} protocol:{GoSet:[udp]} 
vips:{GoMap:map[10.43.0.10:53:]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996205} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:load_balancer Mutator:insert Value:{GoSet:[{GoUUID:u2596996204} {GoUUID:u2596996205}]}}] Timeout: Where:[where column _uuid == {55ba86b1-407f-4f90-86ba-a2378c8d6ccc}] Until: Durable: Comment: Lock: UUIDName:}] I1115 05:39:26.769274 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:wait Table:Load_Balancer Row:map[] Rows:[map[name:Service_openshift-dns/dns-default_TCP_node_switch_release-ci-ci-op-k5cwk1pv-7cb14]] Columns:[name] Mutations:[] Timeout:0xc00016e790 Where:[where column name == Service_openshift-dns/dns-default_TCP_node_switch_release-ci-ci-op-k5cwk1pv-7cb14] Until:!= Durable: Comment: Lock: UUIDName:} {Op:insert Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_switch_release-ci-ci-op-k5cwk1pv-7cb14 options:{GoMap:map[event:false reject:true skip_snat:false]} protocol:{GoSet:[tcp]} vips:{GoMap:map[10.43.0.10:53: 10.43.0.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996204} {Op:insert Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_UDP_node_switch_release-ci-ci-op-k5cwk1pv-7cb14 options:{GoMap:map[event:false reject:true skip_snat:false]} protocol:{GoSet:[udp]} vips:{GoMap:map[10.43.0.10:53:]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996205} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:load_balancer Mutator:insert Value:{GoSet:[{GoUUID:u2596996204} {GoUUID:u2596996205}]}}] Timeout: Where:[where column _uuid == 
{55ba86b1-407f-4f90-86ba-a2378c8d6ccc}] Until: Durable: Comment: Lock: UUIDName:}]" I1115 05:39:26.775788 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Load_Balancer" "uuid"="c4d96c7e-1529-457a-9c00-dbe41d077136" I1115 05:39:26.775843 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Northbound" "table"="Load_Balancer" "uuid"="c4d96c7e-1529-457a-9c00-dbe41d077136" "model"="&{UUID:c4d96c7e-1529-457a-9c00-dbe41d077136 ExternalIDs:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:default/kubernetes] HealthCheck:[] IPPortMappings:map[] Name:Service_default/kubernetes_TCP_node_switch_release-ci-ci-op-k5cwk1pv-7cb14 Options:map[event:false reject:true skip_snat:false] Protocol:0xc0004ddc80 SelectionFields:[] Vips:map[10.43.0.1:443:10.0.0.2:6443]}" I1115 05:39:26.775864 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Logical_Switch" "uuid"="55ba86b1-407f-4f90-86ba-a2378c8d6ccc" I1115 05:39:26.775910 63045 cache.go:1040] cache "msg"="updated row" "database"="OVN_Northbound" "table"="Logical_Switch" "uuid"="55ba86b1-407f-4f90-86ba-a2378c8d6ccc" "new"="&{UUID:55ba86b1-407f-4f90-86ba-a2378c8d6ccc ACLs:[] Copp: DNSRecords:[] ExternalIDs:map[] ForwardingGroups:[] LoadBalancer:[c4d96c7e-1529-457a-9c00-dbe41d077136] LoadBalancerGroup:[7b81a844-05a7-4d75-90db-fc377eeda1a5] Name:release-ci-ci-op-k5cwk1pv-7cb14 OtherConfig:map[exclude_ips:10.42.0.2 mcast_eth_src:0a:58:0a:2a:00:01 mcast_ip4_src:10.42.0.1 mcast_querier:true mcast_snoop:true subnet:10.42.0.0/24] Ports:[16e74789-d810-4e2a-86f4-f17eb9166ace] QOSRules:[]}" "old"="&{UUID:55ba86b1-407f-4f90-86ba-a2378c8d6ccc ACLs:[] Copp: DNSRecords:[] ExternalIDs:map[] ForwardingGroups:[] LoadBalancer:[] LoadBalancerGroup:[7b81a844-05a7-4d75-90db-fc377eeda1a5] Name:release-ci-ci-op-k5cwk1pv-7cb14 OtherConfig:map[exclude_ips:10.42.0.2 mcast_eth_src:0a:58:0a:2a:00:01 mcast_ip4_src:10.42.0.1 mcast_querier:true mcast_snoop:true subnet:10.42.0.0/24] 
Ports:[16e74789-d810-4e2a-86f4-f17eb9166ace] QOSRules:[]}" I1115 05:39:26.776020 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Load_Balancer" "uuid"="b32a33dc-269e-4adc-a189-4e21c2044d70" I1115 05:39:26.776046 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Northbound" "table"="Load_Balancer" "uuid"="b32a33dc-269e-4adc-a189-4e21c2044d70" "model"="&{UUID:b32a33dc-269e-4adc-a189-4e21c2044d70 ExternalIDs:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress/router-internal-default] HealthCheck:[] IPPortMappings:map[] Name:Service_openshift-ingress/router-internal-default_TCP_cluster Options:map[event:false reject:true skip_snat:false] Protocol:0xc0004248a0 SelectionFields:[] Vips:map[10.43.73.144:1936: 10.43.73.144:443: 10.43.73.144:80:]}" I1115 05:39:26.776060 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Load_Balancer_Group" "uuid"="7b81a844-05a7-4d75-90db-fc377eeda1a5" I1115 05:39:26.776085 63045 cache.go:1040] cache "msg"="updated row" "database"="OVN_Northbound" "table"="Load_Balancer_Group" "uuid"="7b81a844-05a7-4d75-90db-fc377eeda1a5" "new"="&{UUID:7b81a844-05a7-4d75-90db-fc377eeda1a5 LoadBalancer:[b32a33dc-269e-4adc-a189-4e21c2044d70] Name:clusterLBGroup}" "old"="&{UUID:7b81a844-05a7-4d75-90db-fc377eeda1a5 LoadBalancer:[] Name:clusterLBGroup}" I1115 05:39:26.776232 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Load_Balancer" "uuid"="2c790ce1-a33c-4f51-9824-b25b0b77e391" I1115 05:39:26.776258 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Northbound" "table"="Load_Balancer" "uuid"="2c790ce1-a33c-4f51-9824-b25b0b77e391" "model"="&{UUID:2c790ce1-a33c-4f51-9824-b25b0b77e391 ExternalIDs:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default] HealthCheck:[] IPPortMappings:map[] Name:Service_openshift-dns/dns-default_TCP_node_switch_release-ci-ci-op-k5cwk1pv-7cb14 
Options:map[event:false reject:true skip_snat:false] Protocol:0xc00043c3f0 SelectionFields:[] Vips:map[10.43.0.10:53: 10.43.0.10:9154:]}" I1115 05:39:26.776279 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Load_Balancer" "uuid"="69bf140c-11c9-48c5-ba36-a8ff6921bf07" I1115 05:39:26.776303 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Northbound" "table"="Load_Balancer" "uuid"="69bf140c-11c9-48c5-ba36-a8ff6921bf07" "model"="&{UUID:69bf140c-11c9-48c5-ba36-a8ff6921bf07 ExternalIDs:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default] HealthCheck:[] IPPortMappings:map[] Name:Service_openshift-dns/dns-default_UDP_node_switch_release-ci-ci-op-k5cwk1pv-7cb14 Options:map[event:false reject:true skip_snat:false] Protocol:0xc00043cd30 SelectionFields:[] Vips:map[10.43.0.10:53:]}" I1115 05:39:26.776315 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Logical_Switch" "uuid"="55ba86b1-407f-4f90-86ba-a2378c8d6ccc" I1115 05:39:26.776356 63045 cache.go:1040] cache "msg"="updated row" "database"="OVN_Northbound" "table"="Logical_Switch" "uuid"="55ba86b1-407f-4f90-86ba-a2378c8d6ccc" "new"="&{UUID:55ba86b1-407f-4f90-86ba-a2378c8d6ccc ACLs:[] Copp: DNSRecords:[] ExternalIDs:map[] ForwardingGroups:[] LoadBalancer:[c4d96c7e-1529-457a-9c00-dbe41d077136 2c790ce1-a33c-4f51-9824-b25b0b77e391 69bf140c-11c9-48c5-ba36-a8ff6921bf07] LoadBalancerGroup:[7b81a844-05a7-4d75-90db-fc377eeda1a5] Name:release-ci-ci-op-k5cwk1pv-7cb14 OtherConfig:map[exclude_ips:10.42.0.2 mcast_eth_src:0a:58:0a:2a:00:01 mcast_ip4_src:10.42.0.1 mcast_querier:true mcast_snoop:true subnet:10.42.0.0/24] Ports:[16e74789-d810-4e2a-86f4-f17eb9166ace] QOSRules:[]}" "old"="&{UUID:55ba86b1-407f-4f90-86ba-a2378c8d6ccc ACLs:[] Copp: DNSRecords:[] ExternalIDs:map[] ForwardingGroups:[] LoadBalancer:[c4d96c7e-1529-457a-9c00-dbe41d077136] LoadBalancerGroup:[7b81a844-05a7-4d75-90db-fc377eeda1a5] 
Name:release-ci-ci-op-k5cwk1pv-7cb14 OtherConfig:map[exclude_ips:10.42.0.2 mcast_eth_src:0a:58:0a:2a:00:01 mcast_ip4_src:10.42.0.1 mcast_querier:true mcast_snoop:true subnet:10.42.0.0/24] Ports:[16e74789-d810-4e2a-86f4-f17eb9166ace] QOSRules:[]}" I1115 05:39:26.776398 63045 loadbalancer.go:205] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-dns/dns-default"} I1115 05:39:26.776444 63045 services_controller.go:245] Finished syncing service dns-default on namespace openshift-dns : 7.588079ms I1115 05:39:26.776650 63045 model_client.go:345] Create operations generated as: [{Op:insert Table:Logical_Router_Policy Row:map[action:allow match:ip4.src == 10.42.0.0/16 && ip4.dst == 10.42.0.0/16 priority:102] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996206}] I1115 05:39:26.776701 63045 model_client.go:370] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:u2596996206}]}}] Timeout: Where:[where column _uuid == {0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5}] Until: Durable: Comment: Lock: UUIDName:}] I1115 05:39:26.776720 63045 transact.go:41] Configuring OVN: [{Op:wait Table:Logical_Router_Policy Row:map[] Rows:[map[match:ip4.src == 10.42.0.0/16 && ip4.dst == 10.42.0.0/16 priority:102]] Columns:[priority match] Mutations:[] Timeout:0xc0001dd2c0 Where:[where column priority == 102 where column match == ip4.src == 10.42.0.0/16 && ip4.dst == 10.42.0.0/16] Until:!= Durable: Comment: Lock: UUIDName:} {Op:insert Table:Logical_Router_Policy Row:map[action:allow match:ip4.src == 10.42.0.0/16 && ip4.dst == 10.42.0.0/16 priority:102] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996206} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:u2596996206}]}}] 
Timeout: Where:[where column _uuid == {0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5}] Until: Durable: Comment: Lock: UUIDName:}] I1115 05:39:26.776775 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:wait Table:Logical_Router_Policy Row:map[] Rows:[map[match:ip4.src == 10.42.0.0/16 && ip4.dst == 10.42.0.0/16 priority:102]] Columns:[priority match] Mutations:[] Timeout:0xc0001dd2c0 Where:[where column priority == 102 where column match == ip4.src == 10.42.0.0/16 && ip4.dst == 10.42.0.0/16] Until:!= Durable: Comment: Lock: UUIDName:} {Op:insert Table:Logical_Router_Policy Row:map[action:allow match:ip4.src == 10.42.0.0/16 && ip4.dst == 10.42.0.0/16 priority:102] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996206} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:u2596996206}]}}] Timeout: Where:[where column _uuid == {0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5}] Until: Durable: Comment: Lock: UUIDName:}]" I1115 05:39:26.776848 63045 loadbalancer.go:205] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"default/kubernetes"} I1115 05:39:26.776885 63045 services_controller.go:245] Finished syncing service kubernetes on namespace default : 9.069556ms I1115 05:39:26.776905 63045 loadbalancer.go:205] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-ingress/router-internal-default"} I1115 05:39:26.776936 63045 services_controller.go:245] Finished syncing service router-internal-default on namespace openshift-ingress : 8.539689ms I1115 05:39:26.776952 63045 repair.go:158] Running Service post-sync cleanup I1115 05:39:26.776984 63045 repair.go:176] Deleting 0 legacy LBs I1115 05:39:26.779080 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Logical_Router_Policy" 
"uuid"="7969e67a-bd78-4152-9fcb-9a1dda1b1bf8" I1115 05:39:26.779128 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Northbound" "table"="Logical_Router_Policy" "uuid"="7969e67a-bd78-4152-9fcb-9a1dda1b1bf8" "model"="&{UUID:7969e67a-bd78-4152-9fcb-9a1dda1b1bf8 Action:allow ExternalIDs:map[] Match:ip4.src == 10.42.0.0/16 && ip4.dst == 10.42.0.0/16 Nexthop: Nexthops:[] Options:map[] Priority:102}" I1115 05:39:26.779147 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Logical_Router" "uuid"="0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5" I1115 05:39:26.779193 63045 cache.go:1040] cache "msg"="updated row" "database"="OVN_Northbound" "table"="Logical_Router" "uuid"="0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5" "new"="&{UUID:0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5 Copp: Enabled: ExternalIDs:map[k8s-cluster-router:yes k8s-ovn-topo-version:5] LoadBalancer:[] LoadBalancerGroup:[] Name:ovn_cluster_router Nat:[] Options:map[mcast_relay:true] Policies:[7969e67a-bd78-4152-9fcb-9a1dda1b1bf8] Ports:[d132d1a8-41f8-430a-a203-0d05cafce999 5e6913a8-e256-48aa-8cb4-bd6613d3ba1f] StaticRoutes:[]}" "old"="&{UUID:0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5 Copp: Enabled: ExternalIDs:map[k8s-cluster-router:yes k8s-ovn-topo-version:5] LoadBalancer:[] LoadBalancerGroup:[] Name:ovn_cluster_router Nat:[] Options:map[mcast_relay:true] Policies:[] Ports:[d132d1a8-41f8-430a-a203-0d05cafce999 5e6913a8-e256-48aa-8cb4-bd6613d3ba1f] StaticRoutes:[]}" I1115 05:39:26.779292 63045 model_client.go:345] Create operations generated as: [{Op:insert Table:Logical_Router_Policy Row:map[action:allow match:ip4.src == 10.42.0.0/16 && ip4.dst == 100.64.0.0/16 priority:102] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996207}] I1115 05:39:26.779330 63045 model_client.go:370] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:policies Mutator:insert 
Value:{GoSet:[{GoUUID:u2596996207}]}}] Timeout: Where:[where column _uuid == {0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5}] Until: Durable: Comment: Lock: UUIDName:}] I1115 05:39:26.779348 63045 transact.go:41] Configuring OVN: [{Op:wait Table:Logical_Router_Policy Row:map[] Rows:[map[match:ip4.src == 10.42.0.0/16 && ip4.dst == 100.64.0.0/16 priority:102]] Columns:[priority match] Mutations:[] Timeout:0xc000dfa748 Where:[where column priority == 102 where column match == ip4.src == 10.42.0.0/16 && ip4.dst == 100.64.0.0/16] Until:!= Durable: Comment: Lock: UUIDName:} {Op:insert Table:Logical_Router_Policy Row:map[action:allow match:ip4.src == 10.42.0.0/16 && ip4.dst == 100.64.0.0/16 priority:102] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996207} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:u2596996207}]}}] Timeout: Where:[where column _uuid == {0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5}] Until: Durable: Comment: Lock: UUIDName:}] I1115 05:39:26.779409 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:wait Table:Logical_Router_Policy Row:map[] Rows:[map[match:ip4.src == 10.42.0.0/16 && ip4.dst == 100.64.0.0/16 priority:102]] Columns:[priority match] Mutations:[] Timeout:0xc000dfa748 Where:[where column priority == 102 where column match == ip4.src == 10.42.0.0/16 && ip4.dst == 100.64.0.0/16] Until:!= Durable: Comment: Lock: UUIDName:} {Op:insert Table:Logical_Router_Policy Row:map[action:allow match:ip4.src == 10.42.0.0/16 && ip4.dst == 100.64.0.0/16 priority:102] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996207} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:u2596996207}]}}] Timeout: Where:[where column _uuid == {0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5}] Until: Durable: 
Comment: Lock: UUIDName:}]" I1115 05:39:26.793985 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Southbound" "table"="SB_Global" "uuid"="a2e15b8c-b327-4492-aba9-203561da312a" I1115 05:39:26.794045 63045 cache.go:1040] cache "msg"="updated row" "database"="OVN_Southbound" "table"="SB_Global" "uuid"="a2e15b8c-b327-4492-aba9-203561da312a" "new"="&{UUID:a2e15b8c-b327-4492-aba9-203561da312a Connections:[] ExternalIDs:map[] Ipsec:false NbCfg:0 Options:map[mac_prefix:da:45:83 max_tunid:16711680 northd_internal_version:22.06.1-20.23.0-63.4 northd_probe_interval:5000 svc_monitor_mac:ae:de:d6:8a:ee:d3 use_logical_dp_groups:true] SSL:}" "old"="&{UUID:a2e15b8c-b327-4492-aba9-203561da312a Connections:[] ExternalIDs:map[] Ipsec:false NbCfg:0 Options:map[] SSL:}" I1115 05:39:26.794062 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Southbound" "table"="Datapath_Binding" "uuid"="03caccf0-c9e5-40b2-b775-78ddd6e1ae32" I1115 05:39:26.794090 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Southbound" "table"="Datapath_Binding" "uuid"="03caccf0-c9e5-40b2-b775-78ddd6e1ae32" "model"="&{UUID:03caccf0-c9e5-40b2-b775-78ddd6e1ae32 ExternalIDs:map[logical-switch:55ba86b1-407f-4f90-86ba-a2378c8d6ccc name:release-ci-ci-op-k5cwk1pv-7cb14] LoadBalancers:[] TunnelKey:1}" I1115 05:39:26.794108 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Southbound" "table"="Datapath_Binding" "uuid"="831330e5-54ca-447f-b92c-4caa0cfec9eb" I1115 05:39:26.794126 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Southbound" "table"="Datapath_Binding" "uuid"="831330e5-54ca-447f-b92c-4caa0cfec9eb" "model"="&{UUID:831330e5-54ca-447f-b92c-4caa0cfec9eb ExternalIDs:map[logical-router:0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5 name:ovn_cluster_router] LoadBalancers:[] TunnelKey:3}" I1115 05:39:26.794144 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Southbound" "table"="Datapath_Binding" 
"uuid"="fec065db-9b49-4799-9d27-bae0364a24f2" I1115 05:39:26.794161 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Southbound" "table"="Datapath_Binding" "uuid"="fec065db-9b49-4799-9d27-bae0364a24f2" "model"="&{UUID:fec065db-9b49-4799-9d27-bae0364a24f2 ExternalIDs:map[logical-switch:4a7292ab-0326-4438-83d1-4c6f4765fce0 name:join] LoadBalancers:[] TunnelKey:2}" I1115 05:39:26.794176 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Southbound" "table"="Port_Binding" "uuid"="0db1f757-3d50-4b20-a8e6-1b9eefa7986c" I1115 05:39:26.794212 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Southbound" "table"="Port_Binding" "uuid"="0db1f757-3d50-4b20-a8e6-1b9eefa7986c" "model"="&{UUID:0db1f757-3d50-4b20-a8e6-1b9eefa7986c Chassis: Datapath:03caccf0-c9e5-40b2-b775-78ddd6e1ae32 Encap: ExternalIDs:map[] GatewayChassis:[] HaChassisGroup: LogicalPort:stor-release-ci-ci-op-k5cwk1pv-7cb14 MAC:[router] NatAddresses:[] Options:map[peer:rtos-release-ci-ci-op-k5cwk1pv-7cb14] ParentPort: RequestedChassis: Tag: TunnelKey:1 Type:patch Up:0xc0000a3808 VirtualParent:}" I1115 05:39:26.794249 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Southbound" "table"="Port_Binding" "uuid"="1305a513-d458-4dfe-82b4-5599e68c9f61" I1115 05:39:26.794290 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Southbound" "table"="Port_Binding" "uuid"="1305a513-d458-4dfe-82b4-5599e68c9f61" "model"="&{UUID:1305a513-d458-4dfe-82b4-5599e68c9f61 Chassis: Datapath:831330e5-54ca-447f-b92c-4caa0cfec9eb Encap: ExternalIDs:map[] GatewayChassis:[] HaChassisGroup: LogicalPort:rtoj-ovn_cluster_router MAC:[0a:58:64:40:00:01 100.64.0.1/16] NatAddresses:[] Options:map[peer:jtor-ovn_cluster_router] ParentPort: RequestedChassis: Tag: TunnelKey:2 Type:patch Up:0xc0000a3a20 VirtualParent:}" I1115 05:39:26.794309 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Southbound" "table"="Port_Binding" 
"uuid"="6c7e5a94-dfdd-4ffe-971f-d215f164aac7" I1115 05:39:26.794333 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Southbound" "table"="Port_Binding" "uuid"="6c7e5a94-dfdd-4ffe-971f-d215f164aac7" "model"="&{UUID:6c7e5a94-dfdd-4ffe-971f-d215f164aac7 Chassis: Datapath:831330e5-54ca-447f-b92c-4caa0cfec9eb Encap: ExternalIDs:map[] GatewayChassis:[] HaChassisGroup: LogicalPort:rtos-release-ci-ci-op-k5cwk1pv-7cb14 MAC:[0a:58:0a:2a:00:01 10.42.0.1/24] NatAddresses:[] Options:map[peer:stor-release-ci-ci-op-k5cwk1pv-7cb14] ParentPort: RequestedChassis: Tag: TunnelKey:1 Type:patch Up:0xc0000a3bd8 VirtualParent:}" I1115 05:39:26.794357 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Southbound" "table"="Port_Binding" "uuid"="f6714664-44fe-4f35-a44a-76dc46eadf2a" I1115 05:39:26.794386 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Southbound" "table"="Port_Binding" "uuid"="f6714664-44fe-4f35-a44a-76dc46eadf2a" "model"="&{UUID:f6714664-44fe-4f35-a44a-76dc46eadf2a Chassis: Datapath:fec065db-9b49-4799-9d27-bae0364a24f2 Encap: ExternalIDs:map[] GatewayChassis:[] HaChassisGroup: LogicalPort:jtor-ovn_cluster_router MAC:[router] NatAddresses:[] Options:map[peer:rtoj-ovn_cluster_router] ParentPort: RequestedChassis: Tag: TunnelKey:1 Type:patch Up:0xc0000a3dc0 VirtualParent:}" I1115 05:39:26.794480 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Logical_Router_Policy" "uuid"="a3f0805d-e3db-4a3a-86f8-49cb514ad8f9" I1115 05:39:26.794513 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Northbound" "table"="Logical_Router_Policy" "uuid"="a3f0805d-e3db-4a3a-86f8-49cb514ad8f9" "model"="&{UUID:a3f0805d-e3db-4a3a-86f8-49cb514ad8f9 Action:allow ExternalIDs:map[] Match:ip4.src == 10.42.0.0/16 && ip4.dst == 100.64.0.0/16 Nexthop: Nexthops:[] Options:map[] Priority:102}" I1115 05:39:26.794527 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" 
"table"="Logical_Router" "uuid"="0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5" I1115 05:39:26.794567 63045 cache.go:1040] cache "msg"="updated row" "database"="OVN_Northbound" "table"="Logical_Router" "uuid"="0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5" "new"="&{UUID:0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5 Copp: Enabled: ExternalIDs:map[k8s-cluster-router:yes k8s-ovn-topo-version:5] LoadBalancer:[] LoadBalancerGroup:[] Name:ovn_cluster_router Nat:[] Options:map[mcast_relay:true] Policies:[7969e67a-bd78-4152-9fcb-9a1dda1b1bf8 a3f0805d-e3db-4a3a-86f8-49cb514ad8f9] Ports:[d132d1a8-41f8-430a-a203-0d05cafce999 5e6913a8-e256-48aa-8cb4-bd6613d3ba1f] StaticRoutes:[]}" "old"="&{UUID:0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5 Copp: Enabled: ExternalIDs:map[k8s-cluster-router:yes k8s-ovn-topo-version:5] LoadBalancer:[] LoadBalancerGroup:[] Name:ovn_cluster_router Nat:[] Options:map[mcast_relay:true] Policies:[7969e67a-bd78-4152-9fcb-9a1dda1b1bf8] Ports:[d132d1a8-41f8-430a-a203-0d05cafce999 5e6913a8-e256-48aa-8cb4-bd6613d3ba1f] StaticRoutes:[]}" I1115 05:39:26.794647 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="NB_Global" "uuid"="975338d6-246c-4dca-9a4f-b865d4f805e9" I1115 05:39:26.794688 63045 cache.go:1040] cache "msg"="updated row" "database"="OVN_Northbound" "table"="NB_Global" "uuid"="975338d6-246c-4dca-9a4f-b865d4f805e9" "new"="&{UUID:975338d6-246c-4dca-9a4f-b865d4f805e9 Connections:[] ExternalIDs:map[] HvCfg:0 HvCfgTimestamp:0 Ipsec:false Name: NbCfg:0 NbCfgTimestamp:0 Options:map[mac_prefix:da:45:83 max_tunid:16711680 northd_internal_version:22.06.1-20.23.0-63.4 northd_probe_interval:5000 svc_monitor_mac:ae:de:d6:8a:ee:d3 use_logical_dp_groups:true] SbCfg:0 SbCfgTimestamp:0 SSL:}" "old"="&{UUID:975338d6-246c-4dca-9a4f-b865d4f805e9 Connections:[] ExternalIDs:map[] HvCfg:0 HvCfgTimestamp:0 Ipsec:false Name: NbCfg:0 NbCfgTimestamp:0 Options:map[northd_probe_interval:5000 use_logical_dp_groups:true] SbCfg:0 SbCfgTimestamp:0 SSL:}" I1115 05:39:26.794700 
63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Logical_Switch_Port" "uuid"="402f3397-e1d9-4671-bc59-c4e9a435a625"
I1115 05:39:26.794736 63045 cache.go:1040] cache "msg"="updated row" "database"="OVN_Northbound" "table"="Logical_Switch_Port" "uuid"="402f3397-e1d9-4671-bc59-c4e9a435a625" "new"="&{UUID:402f3397-e1d9-4671-bc59-c4e9a435a625 Addresses:[router] Dhcpv4Options: Dhcpv6Options: DynamicAddresses: Enabled: ExternalIDs:map[] HaChassisGroup: Name:jtor-ovn_cluster_router Options:map[router-port:rtoj-ovn_cluster_router] ParentName: PortSecurity:[] Tag: TagRequest: Type:router Up:0xc000276968}" "old"="&{UUID:402f3397-e1d9-4671-bc59-c4e9a435a625 Addresses:[router] Dhcpv4Options: Dhcpv6Options: DynamicAddresses: Enabled: ExternalIDs:map[] HaChassisGroup: Name:jtor-ovn_cluster_router Options:map[router-port:rtoj-ovn_cluster_router] ParentName: PortSecurity:[] Tag: TagRequest: Type:router Up:}"
I1115 05:39:26.794746 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Logical_Switch_Port" "uuid"="16e74789-d810-4e2a-86f4-f17eb9166ace"
I1115 05:39:26.794772 63045 cache.go:1040] cache "msg"="updated row" "database"="OVN_Northbound" "table"="Logical_Switch_Port" "uuid"="16e74789-d810-4e2a-86f4-f17eb9166ace" "new"="&{UUID:16e74789-d810-4e2a-86f4-f17eb9166ace Addresses:[router] Dhcpv4Options: Dhcpv6Options: DynamicAddresses: Enabled: ExternalIDs:map[] HaChassisGroup: Name:stor-release-ci-ci-op-k5cwk1pv-7cb14 Options:map[router-port:rtos-release-ci-ci-op-k5cwk1pv-7cb14] ParentName: PortSecurity:[] Tag: TagRequest: Type:router Up:0xc000276b18}" "old"="&{UUID:16e74789-d810-4e2a-86f4-f17eb9166ace Addresses:[router] Dhcpv4Options: Dhcpv6Options: DynamicAddresses: Enabled: ExternalIDs:map[] HaChassisGroup: Name:stor-release-ci-ci-op-k5cwk1pv-7cb14 Options:map[router-port:rtos-release-ci-ci-op-k5cwk1pv-7cb14] ParentName: PortSecurity:[] Tag: TagRequest: Type:router Up:}"
I1115 05:39:26.794817 63045 egress_services_node.go:169] Processing sync for Egress Service node release-ci-ci-op-k5cwk1pv-7cb14
I1115 05:39:26.794914 63045 model_client.go:345] Create operations generated as: [{Op:insert Table:Logical_Router_Policy Row:map[action:allow external_ids:{GoMap:map[node:release-ci-ci-op-k5cwk1pv-7cb14]} match:ip4.src == 10.42.0.0/16 && ip4.dst == 10.0.0.2/32 priority:102] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996208}]
I1115 05:39:26.794951 63045 model_client.go:370] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:u2596996208}]}}] Timeout: Where:[where column _uuid == {0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5}] Until: Durable: Comment: Lock: UUIDName:}]
I1115 05:39:26.794969 63045 transact.go:41] Configuring OVN: [{Op:wait Table:Logical_Router_Policy Row:map[] Rows:[map[match:ip4.src == 10.42.0.0/16 && ip4.dst == 10.0.0.2/32 priority:102]] Columns:[priority match] Mutations:[] Timeout:0xc000276ce8 Where:[where column priority == 102 where column match == ip4.src == 10.42.0.0/16 && ip4.dst == 10.0.0.2/32] Until:!= Durable: Comment: Lock: UUIDName:} {Op:insert Table:Logical_Router_Policy Row:map[action:allow external_ids:{GoMap:map[node:release-ci-ci-op-k5cwk1pv-7cb14]} match:ip4.src == 10.42.0.0/16 && ip4.dst == 10.0.0.2/32 priority:102] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996208} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:u2596996208}]}}] Timeout: Where:[where column _uuid == {0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5}] Until: Durable: Comment: Lock: UUIDName:}]
I1115 05:39:26.795027 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:wait Table:Logical_Router_Policy Row:map[] Rows:[map[match:ip4.src == 10.42.0.0/16 && ip4.dst == 10.0.0.2/32 priority:102]] Columns:[priority match] Mutations:[] Timeout:0xc000276ce8 Where:[where column priority == 102 where column match == ip4.src == 10.42.0.0/16 && ip4.dst == 10.0.0.2/32] Until:!= Durable: Comment: Lock: UUIDName:} {Op:insert Table:Logical_Router_Policy Row:map[action:allow external_ids:{GoMap:map[node:release-ci-ci-op-k5cwk1pv-7cb14]} match:ip4.src == 10.42.0.0/16 && ip4.dst == 10.0.0.2/32 priority:102] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996208} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:u2596996208}]}}] Timeout: Where:[where column _uuid == {0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5}] Until: Durable: Comment: Lock: UUIDName:}]"
I1115 05:39:26.798149 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Logical_Router_Policy" "uuid"="49b06b1c-7370-4cfc-8440-1876c45a4898"
I1115 05:39:26.798193 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Northbound" "table"="Logical_Router_Policy" "uuid"="49b06b1c-7370-4cfc-8440-1876c45a4898" "model"="&{UUID:49b06b1c-7370-4cfc-8440-1876c45a4898 Action:allow ExternalIDs:map[node:release-ci-ci-op-k5cwk1pv-7cb14] Match:ip4.src == 10.42.0.0/16 && ip4.dst == 10.0.0.2/32 Nexthop: Nexthops:[] Options:map[] Priority:102}"
I1115 05:39:26.798212 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Logical_Router" "uuid"="0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5"
I1115 05:39:26.798259 63045 cache.go:1040] cache "msg"="updated row" "database"="OVN_Northbound" "table"="Logical_Router" "uuid"="0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5" "new"="&{UUID:0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5 Copp: Enabled: ExternalIDs:map[k8s-cluster-router:yes k8s-ovn-topo-version:5] LoadBalancer:[] LoadBalancerGroup:[] Name:ovn_cluster_router Nat:[] Options:map[mcast_relay:true] Policies:[7969e67a-bd78-4152-9fcb-9a1dda1b1bf8 a3f0805d-e3db-4a3a-86f8-49cb514ad8f9 49b06b1c-7370-4cfc-8440-1876c45a4898] Ports:[d132d1a8-41f8-430a-a203-0d05cafce999 5e6913a8-e256-48aa-8cb4-bd6613d3ba1f] StaticRoutes:[]}" "old"="&{UUID:0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5 Copp: Enabled: ExternalIDs:map[k8s-cluster-router:yes k8s-ovn-topo-version:5] LoadBalancer:[] LoadBalancerGroup:[] Name:ovn_cluster_router Nat:[] Options:map[mcast_relay:true] Policies:[7969e67a-bd78-4152-9fcb-9a1dda1b1bf8 a3f0805d-e3db-4a3a-86f8-49cb514ad8f9] Ports:[d132d1a8-41f8-430a-a203-0d05cafce999 5e6913a8-e256-48aa-8cb4-bd6613d3ba1f] StaticRoutes:[]}"
I1115 05:39:26.798304 63045 egress_services_node.go:172] Finished syncing Egress Service node release-ci-ci-op-k5cwk1pv-7cb14: 3.487164ms
I1115 05:39:26.806576 63045 ovs.go:203] Exec(8): stdout: "\"52:17:ad:e6:11:7d\"\n"
I1115 05:39:26.806614 63045 ovs.go:204] Exec(8): stderr: ""
I1115 05:39:26.806630 63045 ovs.go:200] Exec(9): /usr/bin/ovs-vsctl --timeout=15 set interface ovn-k8s-mp0 mac=52\:17\:ad\:e6\:11\:7d
I1115 05:39:26.817643 63045 ovs.go:203] Exec(9): stdout: ""
I1115 05:39:26.817670 63045 ovs.go:204] Exec(9): stderr: ""
I1115 05:39:26.856351 63045 gateway_init.go:259] Initializing Gateway Functionality
I1115 05:39:26.856532 63045 gateway_localnet.go:171] Node local addresses initialized to: map[10.0.0.2:{10.0.0.2 ffffffff} 10.42.0.2:{10.42.0.0 ffffff00} 127.0.0.1:{127.0.0.0 ff000000} ::1:{::1 ffffffffffffffffffffffffffffffff} fe80::5017:adff:fee6:117d:{fe80:: ffffffffffffffff0000000000000000} fe80::c8b8:1f81:4b0c:7b33:{fe80:: ffffffffffffffff0000000000000000}]
I1115 05:39:26.856650 63045 helper_linux.go:69] Provided gateway interface "br-ex", found as index: 4
I1115 05:39:26.856733 63045 helper_linux.go:94] Found default gateway interface br-ex 10.0.0.1
I1115 05:39:26.856816 63045 gateway_init.go:308] Preparing Local Gateway
I1115 05:39:26.856824 63045 gateway_localnet.go:24] Creating new local gateway
I1115 05:39:26.856834 63045 gateway_iptables.go:67] Adding rule in table: filter, chain: FORWARD with args: "-i ovn-k8s-mp0 -j ACCEPT" for protocol: 0
I1115 05:39:26.861109 63045 gateway_iptables.go:70] Chain: "FORWARD" in table: "filter" already exists, skipping creation: running [/usr/sbin/iptables -t filter -N FORWARD --wait]: exit status 1: iptables: Chain already exists.
I1115 05:39:26.869756 63045 gateway_iptables.go:67] Adding rule in table: filter, chain: FORWARD with args: "-o ovn-k8s-mp0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT" for protocol: 0
I1115 05:39:26.873885 63045 gateway_iptables.go:70] Chain: "FORWARD" in table: "filter" already exists, skipping creation: running [/usr/sbin/iptables -t filter -N FORWARD --wait]: exit status 1: iptables: Chain already exists.
I1115 05:39:26.882597 63045 gateway_iptables.go:67] Adding rule in table: filter, chain: INPUT with args: "-i ovn-k8s-mp0 -m comment --comment from OVN to localhost -j ACCEPT" for protocol: 0
I1115 05:39:26.886711 63045 gateway_iptables.go:70] Chain: "INPUT" in table: "filter" already exists, skipping creation: running [/usr/sbin/iptables -t filter -N INPUT --wait]: exit status 1: iptables: Chain already exists.
I1115 05:39:26.891474 63045 obj_retry.go:1429] Update event received for resource *v1.Pod, old object is equal to new: false
I1115 05:39:26.891509 63045 obj_retry.go:502] Recording update event on pod
I1115 05:39:26.891522 63045 obj_retry.go:1472] Update event received for *v1.Pod openshift-ovn-kubernetes/ovnkube-master-kdsb7
I1115 05:39:26.891530 63045 obj_retry.go:530] Recording success event on pod
I1115 05:39:26.895940 63045 gateway_iptables.go:67] Adding rule in table: nat, chain: POSTROUTING with args: "-s 10.42.0.0/24 -j MASQUERADE" for protocol: 0
I1115 05:39:26.899993 63045 gateway_iptables.go:70] Chain: "POSTROUTING" in table: "nat" already exists, skipping creation: running [/usr/sbin/iptables -t nat -N POSTROUTING --wait]: exit status 1: iptables: Chain already exists.
I1115 05:39:26.908765 63045 ovs.go:200] Exec(10): /usr/bin/ovs-vsctl --timeout=15 port-to-br br-ex
I1115 05:39:26.919042 63045 ovs.go:203] Exec(10): stdout: ""
I1115 05:39:26.919064 63045 ovs.go:204] Exec(10): stderr: "ovs-vsctl: no port named br-ex\n"
I1115 05:39:26.919071 63045 ovs.go:206] Exec(10): err: exit status 1
I1115 05:39:26.919089 63045 ovs.go:200] Exec(11): /usr/bin/ovs-vsctl --timeout=15 br-exists br-ex
I1115 05:39:26.929170 63045 ovs.go:203] Exec(11): stdout: ""
I1115 05:39:26.929241 63045 ovs.go:204] Exec(11): stderr: ""
I1115 05:39:26.929256 63045 ovs.go:200] Exec(12): /usr/bin/ovs-vsctl --timeout=15 list-ports br-ex
I1115 05:39:26.939080 63045 ovs.go:203] Exec(12): stdout: "eth0\n"
I1115 05:39:26.939104 63045 ovs.go:204] Exec(12): stderr: ""
I1115 05:39:26.939116 63045 ovs.go:200] Exec(13): /usr/bin/ovs-vsctl --timeout=15 get Port eth0 Interfaces
I1115 05:39:26.948855 63045 ovs.go:203] Exec(13): stdout: "[e5c6e5fe-8a6c-4a2e-b4be-9ba83bd620c0]\n"
I1115 05:39:26.948878 63045 ovs.go:204] Exec(13): stderr: ""
I1115 05:39:26.948896 63045 ovs.go:200] Exec(14): /usr/bin/ovs-vsctl --timeout=15 get Interface e5c6e5fe-8a6c-4a2e-b4be-9ba83bd620c0 Type
I1115 05:39:26.958567 63045 ovs.go:203] Exec(14): stdout: "system\n"
I1115 05:39:26.958590 63045 ovs.go:204] Exec(14): stderr: ""
I1115 05:39:26.958604 63045 ovs.go:200] Exec(15): /usr/bin/ovs-vsctl --timeout=15 get interface eth0 ofport
I1115 05:39:26.968477 63045 ovs.go:203] Exec(15): stdout: "1\n"
I1115 05:39:26.968511 63045 ovs.go:204] Exec(15): stderr: ""
I1115 05:39:26.968523 63045 ovs.go:200] Exec(16): /usr/bin/ovs-vsctl --timeout=15 --if-exists get interface br-ex mac_in_use
I1115 05:39:26.978189 63045 ovs.go:203] Exec(16): stdout: "\"42:01:0a:00:00:02\"\n"
I1115 05:39:26.978211 63045 ovs.go:204] Exec(16): stderr: ""
I1115 05:39:26.978226 63045 ovs.go:200] Exec(17): /usr/bin/ovs-vsctl --timeout=15 --if-exists get Open_vSwitch . external_ids:ovn-bridge-mappings
I1115 05:39:26.987860 63045 ovs.go:203] Exec(17): stdout: "\n"
I1115 05:39:26.987882 63045 ovs.go:204] Exec(17): stderr: ""
I1115 05:39:26.987895 63045 ovs.go:200] Exec(18): /usr/bin/ovs-vsctl --timeout=15 set Open_vSwitch . external_ids:ovn-bridge-mappings=physnet:br-ex
I1115 05:39:26.999700 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Southbound" "table"="Chassis" "uuid"="8ca710ab-2271-4c58-a733-b2e2cdca4660"
I1115 05:39:26.999836 63045 cache.go:1040] cache "msg"="updated row" "database"="OVN_Southbound" "table"="Chassis" "uuid"="8ca710ab-2271-4c58-a733-b2e2cdca4660" "new"="&{UUID:8ca710ab-2271-4c58-a733-b2e2cdca4660 Encaps:[48522019-e3fa-4218-a4ff-d7c10a3f5dea] ExternalIDs:map[ct-no-masked-label:true datapath-type:system iface-types:bareudp,erspan,geneve,gre,gtpu,internal,ip6erspan,ip6gre,lisp,patch,stt,system,tap,vxlan is-interconn:false ovn-bridge-mappings:physnet:br-ex ovn-chassis-mac-mappings: ovn-cms-options: ovn-enable-lflow-cache:false ovn-limit-lflow-cache: ovn-memlimit-lflow-cache-kb:870 ovn-monitor-all:true ovn-trim-limit-lflow-cache: ovn-trim-timeout-ms: ovn-trim-wmark-perc-lflow-cache: port-up-notif:true] Hostname:release-ci-ci-op-k5cwk1pv-7cb14 Name:77436c83-1258-484f-b8d8-ec91acb3c8f3 NbCfg:0 OtherConfig:map[ct-no-masked-label:true datapath-type:system iface-types:bareudp,erspan,geneve,gre,gtpu,internal,ip6erspan,ip6gre,lisp,patch,stt,system,tap,vxlan is-interconn:false ovn-bridge-mappings:physnet:br-ex ovn-chassis-mac-mappings: ovn-cms-options: ovn-enable-lflow-cache:false ovn-limit-lflow-cache: ovn-memlimit-lflow-cache-kb:870 ovn-monitor-all:true ovn-trim-limit-lflow-cache: ovn-trim-timeout-ms: ovn-trim-wmark-perc-lflow-cache: port-up-notif:true] TransportZones:[] VtepLogicalSwitches:[]}" "old"="&{UUID:8ca710ab-2271-4c58-a733-b2e2cdca4660 Encaps:[48522019-e3fa-4218-a4ff-d7c10a3f5dea] ExternalIDs:map[ct-no-masked-label:true datapath-type:system iface-types:bareudp,erspan,geneve,gre,gtpu,internal,ip6erspan,ip6gre,lisp,patch,stt,system,tap,vxlan is-interconn:false ovn-bridge-mappings: ovn-chassis-mac-mappings: ovn-cms-options: ovn-enable-lflow-cache:false ovn-limit-lflow-cache: ovn-memlimit-lflow-cache-kb:870 ovn-monitor-all:true ovn-trim-limit-lflow-cache: ovn-trim-timeout-ms: ovn-trim-wmark-perc-lflow-cache: port-up-notif:true] Hostname:release-ci-ci-op-k5cwk1pv-7cb14 Name:77436c83-1258-484f-b8d8-ec91acb3c8f3 NbCfg:0 OtherConfig:map[ct-no-masked-label:true datapath-type:system iface-types:bareudp,erspan,geneve,gre,gtpu,internal,ip6erspan,ip6gre,lisp,patch,stt,system,tap,vxlan is-interconn:false ovn-bridge-mappings: ovn-chassis-mac-mappings: ovn-cms-options: ovn-enable-lflow-cache:false ovn-limit-lflow-cache: ovn-memlimit-lflow-cache-kb:870 ovn-monitor-all:true ovn-trim-limit-lflow-cache: ovn-trim-timeout-ms: ovn-trim-wmark-perc-lflow-cache: port-up-notif:true] TransportZones:[] VtepLogicalSwitches:[]}"
I1115 05:39:27.000662 63045 ovs.go:203] Exec(18): stdout: ""
I1115 05:39:27.000679 63045 ovs.go:204] Exec(18): stderr: ""
I1115 05:39:27.000692 63045 ovs.go:200] Exec(19): /usr/bin/ovs-vsctl --timeout=15 --if-exists get Open_vSwitch . external_ids:system-id
I1115 05:39:27.010775 63045 ovs.go:203] Exec(19): stdout: "\"77436c83-1258-484f-b8d8-ec91acb3c8f3\"\n"
I1115 05:39:27.010798 63045 ovs.go:204] Exec(19): stderr: ""
I1115 05:39:27.010810 63045 ovs.go:200] Exec(20): /usr/bin/ovs-appctl --timeout=15 dpif/show-dp-features br-ex
I1115 05:39:27.019220 63045 ovs.go:203] Exec(20): stdout: "Masked set action: Yes\nTunnel push pop: No\nUfid: Yes\nTruncate action: Yes\nClone action: Yes\nSample nesting: 10\nConntrack eventmask: Yes\nConntrack clear: Yes\nMax dp_hash algorithm: 0\nCheck pkt length action: Yes\nConntrack timeout policy: Yes\nExplicit Drop action: No\nOptimized Balance TCP mode: No\nConntrack all-zero IP SNAT: Yes\nMPLS Label add: Yes\nMax VLAN headers: 2\nMax MPLS depth: 3\nRecirc: Yes\nCT state: Yes\nCT zone: Yes\nCT mark: Yes\nCT label: Yes\nCT state NAT: Yes\nCT orig tuple: Yes\nCT orig tuple for IPv6: Yes\nIPv6 ND Extension: No\n"
I1115 05:39:27.019250 63045 ovs.go:204] Exec(20): stderr: ""
I1115 05:39:27.019365 63045 gateway_iptables.go:87] Deleting rule in table: filter, chain: FORWARD with args: "-p tcp -m tcp --dport 22623 -j REJECT" for protocol: 0
I1115 05:39:27.029344 63045 gateway_iptables.go:87] Deleting rule in table: filter, chain: FORWARD with args: "-p tcp -m tcp --dport 22624 -j REJECT" for protocol: 0
I1115 05:39:27.034090 63045 gateway_iptables.go:87] Deleting rule in table: filter, chain: OUTPUT with args: "-p tcp -m tcp --dport 22623 -j REJECT" for protocol: 0
I1115 05:39:27.038461 63045 gateway_iptables.go:87] Deleting rule in table: filter, chain: OUTPUT with args: "-p tcp -m tcp --dport 22624 -j REJECT" for protocol: 0
I1115 05:39:27.043176 63045 gateway_iptables.go:67] Adding rule in table: filter, chain: FORWARD with args: "-p tcp -m tcp --dport 22623 --syn -j REJECT" for protocol: 0
I1115 05:39:27.047426 63045 gateway_iptables.go:70] Chain: "FORWARD" in table: "filter" already exists, skipping creation: running [/usr/sbin/iptables -t filter -N FORWARD --wait]: exit status 1: iptables: Chain already exists.
I1115 05:39:27.056762 63045 gateway_iptables.go:67] Adding rule in table: filter, chain: FORWARD with args: "-p tcp -m tcp --dport 22624 --syn -j REJECT" for protocol: 0
I1115 05:39:27.060991 63045 gateway_iptables.go:70] Chain: "FORWARD" in table: "filter" already exists, skipping creation: running [/usr/sbin/iptables -t filter -N FORWARD --wait]: exit status 1: iptables: Chain already exists.
I1115 05:39:27.070323 63045 gateway_iptables.go:67] Adding rule in table: filter, chain: OUTPUT with args: "-p tcp -m tcp --dport 22623 --syn -j REJECT" for protocol: 0
I1115 05:39:27.074618 63045 gateway_iptables.go:70] Chain: "OUTPUT" in table: "filter" already exists, skipping creation: running [/usr/sbin/iptables -t filter -N OUTPUT --wait]: exit status 1: iptables: Chain already exists.
I1115 05:39:27.083416 63045 gateway_iptables.go:67] Adding rule in table: filter, chain: OUTPUT with args: "-p tcp -m tcp --dport 22624 --syn -j REJECT" for protocol: 0
I1115 05:39:27.087744 63045 gateway_iptables.go:70] Chain: "OUTPUT" in table: "filter" already exists, skipping creation: running [/usr/sbin/iptables -t filter -N OUTPUT --wait]: exit status 1: iptables: Chain already exists.
I1115 05:39:27.096403 63045 gateway_localnet.go:142] Local Gateway Creation Complete
I1115 05:39:27.096533 63045 node.go:799] MTU (1460) of network interface br-ex is big enough to deal with Geneve header overhead (sum 1458).
I1115 05:39:27.096549 63045 kube.go:97] Setting annotations map[k8s.ovn.org/l3-gateway-config:{"default":{"mode":"local","interface-id":"br-ex_release-ci-ci-op-k5cwk1pv-7cb14","mac-address":"42:01:0a:00:00:02","ip-addresses":["10.0.0.2/32"],"ip-address":"10.0.0.2/32","next-hops":["10.0.0.1"],"next-hop":"10.0.0.1","node-port-enable":"true","vlan-id":"0"}} k8s.ovn.org/node-chassis-id:77436c83-1258-484f-b8d8-ec91acb3c8f3 k8s.ovn.org/node-mgmt-port-mac-address:52:17:ad:e6:11:7d k8s.ovn.org/node-primary-ifaddr:{"ipv4":"10.0.0.2/32"}] on node release-ci-ci-op-k5cwk1pv-7cb14
I1115 05:39:27.101654 63045 obj_retry.go:1429] Update event received for resource *v1.Node, old object is equal to new: false
I1115 05:39:27.101680 63045 obj_retry.go:1472] Update event received for *v1.Node release-ci-ci-op-k5cwk1pv-7cb14
I1115 05:39:27.101699 63045 master.go:1364] Adding or Updating Node "release-ci-ci-op-k5cwk1pv-7cb14"
I1115 05:39:27.101812 63045 model_client.go:354] Update operations generated as: [{Op:update Table:Logical_Router_Port Row:map[mac:0a:58:0a:2a:00:01 networks:{GoSet:[10.42.0.1/24]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {5e6913a8-e256-48aa-8cb4-bd6613d3ba1f}] Until: Durable: Comment: Lock: UUIDName:}]
I1115 05:39:27.101875 63045 model_client.go:370] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:5e6913a8-e256-48aa-8cb4-bd6613d3ba1f}]}}] Timeout: Where:[where column _uuid == {0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5}] Until: Durable: Comment: Lock: UUIDName:}]
I1115 05:39:27.101890 63045 transact.go:41] Configuring OVN: [{Op:update Table:Logical_Router_Port Row:map[mac:0a:58:0a:2a:00:01 networks:{GoSet:[10.42.0.1/24]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {5e6913a8-e256-48aa-8cb4-bd6613d3ba1f}] Until: Durable: Comment: Lock: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:5e6913a8-e256-48aa-8cb4-bd6613d3ba1f}]}}] Timeout: Where:[where column _uuid == {0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5}] Until: Durable: Comment: Lock: UUIDName:}]
I1115 05:39:27.101944 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:update Table:Logical_Router_Port Row:map[mac:0a:58:0a:2a:00:01 networks:{GoSet:[10.42.0.1/24]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {5e6913a8-e256-48aa-8cb4-bd6613d3ba1f}] Until: Durable: Comment: Lock: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:5e6913a8-e256-48aa-8cb4-bd6613d3ba1f}]}}] Timeout: Where:[where column _uuid == {0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5}] Until: Durable: Comment: Lock: UUIDName:}]"
I1115 05:39:27.102156 63045 node.go:432] Waiting for gateway and management port readiness...
I1115 05:39:27.102228 63045 ovs.go:200] Exec(21): /usr/bin/ovs-appctl --timeout=15 -t /var/run/ovn/ovn-controller.62788.ctl connection-status
I1115 05:39:27.110603 63045 ovs.go:200] Exec(22): /usr/bin/ovs-vsctl --timeout=15 --if-exists get interface ovn-k8s-mp0 ofport
I1115 05:39:27.119920 63045 node_tracker.go:174] Processing possible switch / router updates for node release-ci-ci-op-k5cwk1pv-7cb14
I1115 05:39:27.120024 63045 node_tracker.go:148] Node release-ci-ci-op-k5cwk1pv-7cb14 switch + router changed, syncing services
I1115 05:39:27.120031 63045 services_controller.go:344] Full service sync requested
I1115 05:39:27.120039 63045 services_controller.go:364] Adding service default/kubernetes
I1115 05:39:27.120048 63045 services_controller.go:364] Adding service openshift-ingress/router-internal-default
I1115 05:39:27.120055 63045 services_controller.go:364] Adding service openshift-dns/dns-default
I1115 05:39:27.120066 63045 services_controller.go:241] Processing sync for service default/kubernetes
I1115 05:39:27.120072 63045 services_controller.go:280] Service kubernetes retrieved from lister: &Service{ObjectMeta:{kubernetes default 75b8e8a4-42f7-4abc-b0ba-5afada275bf7 198 0 2022-11-15 05:38:37 +0000 UTC map[component:apiserver provider:kubernetes] map[] [] [] [{microshift Update v1 2022-11-15 05:38:37 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:component":{},"f:provider":{}}},"f:spec":{"f:clusterIP":{},"f:internalTrafficPolicy":{},"f:ipFamilyPolicy":{},"f:ports":{".":{},"k:{\"port\":443,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 6443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.43.0.1,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.43.0.1],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
I1115 05:39:27.120165 63045 kube.go:303] Getting endpoints for slice default/kubernetes
I1115 05:39:27.120174 63045 kube.go:330] Adding slice kubernetes endpoints: [10.0.0.2], port: 6443
I1115 05:39:27.120181 63045 kube.go:346] LB Endpoints for default/kubernetes are: [10.0.0.2] / [] on port: 6443
I1115 05:39:27.120189 63045 services_controller.go:296] Built service default/kubernetes LB cluster-wide configs []services.lbConfig(nil)
I1115 05:39:27.120195 63045 services_controller.go:297] Built service default/kubernetes LB per-node configs []services.lbConfig{services.lbConfig{vips:[]string{"10.43.0.1"}, protocol:"TCP", inport:443, eps:util.LbEndpoints{V4IPs:[]string{"10.0.0.2"}, V6IPs:[]string{}, Port:6443}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}
I1115 05:39:27.120225 63045 services_controller.go:303] Built service default/kubernetes cluster-wide LB []loadbalancer.LB{}
I1115 05:39:27.120235 63045 services_controller.go:304] Built service default/kubernetes per-node LB []loadbalancer.LB{loadbalancer.LB{Name:"Service_default/kubernetes_TCP_node_router_release-ci-ci-op-k5cwk1pv-7cb14", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"default/kubernetes"}, Opts:loadbalancer.LBOpts{Unidling:false, Affinity:false, SkipSNAT:false}, Rules:[]loadbalancer.LBRule{loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.1", Port:443}, Targets:[]loadbalancer.Addr{loadbalancer.Addr{IP:"169.254.169.2", Port:6443}}}}, Switches:[]string(nil), Routers:[]string{"GR_release-ci-ci-op-k5cwk1pv-7cb14"}, Groups:[]string(nil)}, loadbalancer.LB{Name:"Service_default/kubernetes_TCP_node_switch_release-ci-ci-op-k5cwk1pv-7cb14", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"default/kubernetes"}, Opts:loadbalancer.LBOpts{Unidling:false, Affinity:false, SkipSNAT:false}, Rules:[]loadbalancer.LBRule{loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.1", Port:443}, Targets:[]loadbalancer.Addr{loadbalancer.Addr{IP:"10.0.0.2", Port:6443}}}}, Switches:[]string{"release-ci-ci-op-k5cwk1pv-7cb14"}, Routers:[]string(nil), Groups:[]string(nil)}}
I1115 05:39:27.120269 63045 services_controller.go:305] Service default/kubernetes has 0 cluster-wide and 1 per-node configs, making 0 and 2 load balancers
I1115 05:39:27.120280 63045 services_controller.go:316] Services do not match, existing lbs: []loadbalancer.LB{loadbalancer.LB{Name:"Service_default/kubernetes_TCP_node_switch_release-ci-ci-op-k5cwk1pv-7cb14", UUID:"c4d96c7e-1529-457a-9c00-dbe41d077136", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"default/kubernetes"}, Opts:loadbalancer.LBOpts{Unidling:false, Affinity:false, SkipSNAT:false}, Rules:[]loadbalancer.LBRule{loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.1", Port:443}, Targets:[]loadbalancer.Addr{loadbalancer.Addr{IP:"10.0.0.2", Port:6443}}}}, Switches:[]string{"release-ci-ci-op-k5cwk1pv-7cb14"}, Routers:[]string(nil), Groups:[]string(nil)}}, built lbs: []loadbalancer.LB{loadbalancer.LB{Name:"Service_default/kubernetes_TCP_node_router_release-ci-ci-op-k5cwk1pv-7cb14", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"default/kubernetes"}, Opts:loadbalancer.LBOpts{Unidling:false, Affinity:false, SkipSNAT:false}, Rules:[]loadbalancer.LBRule{loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.1", Port:443}, Targets:[]loadbalancer.Addr{loadbalancer.Addr{IP:"169.254.169.2", Port:6443}}}}, Switches:[]string(nil), Routers:[]string{"GR_release-ci-ci-op-k5cwk1pv-7cb14"}, Groups:[]string(nil)}, loadbalancer.LB{Name:"Service_default/kubernetes_TCP_node_switch_release-ci-ci-op-k5cwk1pv-7cb14", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"default/kubernetes"}, Opts:loadbalancer.LBOpts{Unidling:false, Affinity:false, SkipSNAT:false}, Rules:[]loadbalancer.LBRule{loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.1", Port:443}, Targets:[]loadbalancer.Addr{loadbalancer.Addr{IP:"10.0.0.2", Port:6443}}}}, Switches:[]string{"release-ci-ci-op-k5cwk1pv-7cb14"}, Routers:[]string(nil), Groups:[]string(nil)}}
I1115 05:39:27.120405 63045 model_client.go:354] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:default/kubernetes]} name:Service_default/kubernetes_TCP_node_switch_release-ci-ci-op-k5cwk1pv-7cb14 options:{GoMap:map[event:false reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.43.0.1:443:10.0.0.2:6443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c4d96c7e-1529-457a-9c00-dbe41d077136}] Until: Durable: Comment: Lock: UUIDName:}]
I1115 05:39:27.120472 63045 model_client.go:345] Create operations generated as: [{Op:insert Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:default/kubernetes]} name:Service_default/kubernetes_TCP_node_router_release-ci-ci-op-k5cwk1pv-7cb14 options:{GoMap:map[event:false reject:true skip_snat:false]} protocol:{GoSet:[tcp]} vips:{GoMap:map[10.43.0.1:443:169.254.169.2:6443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996209}]
I1115 05:39:27.120521 63045 services_controller.go:245] Finished syncing service kubernetes on namespace default : 454.94µs
I1115 05:39:27.120545 63045 services_controller.go:218] "Error syncing service, retrying" service="default/kubernetes" err="failed to ensure service default/kubernetes load balancers: object not found"
I1115 05:39:27.120567 63045 services_controller.go:241] Processing sync for service openshift-ingress/router-internal-default
I1115 05:39:27.120573 63045 services_controller.go:280] Service router-internal-default retrieved from lister: &Service{ObjectMeta:{router-internal-default openshift-ingress afda66bc-4681-4006-9855-027e494837c6 322 0 2022-11-15 05:39:02 +0000 UTC map[ingresscontroller.operator.openshift.io/owning-ingresscontroller:default] map[operator.openshift.io/spec-hash:94d40d813f37d0a9b7725c3a5d9733e785139a9335a8a38f01758b6b244ab402] [] [] [{microshift Update v1 2022-11-15 05:39:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:operator.openshift.io/spec-hash":{}},"f:labels":{".":{},"f:ingresscontroller.operator.openshift.io/owning-ingresscontroller":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}},"k:{\"port\":443,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}},"k:{\"port\":1936,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{1 0 http},NodePort:0,AppProtocol:nil,},ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{1 0 https},NodePort:0,AppProtocol:nil,},ServicePort{Name:metrics,Protocol:TCP,Port:1936,TargetPort:{0 1936 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default,},ClusterIP:10.43.73.144,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.43.73.144],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
I1115 05:39:27.120636 63045 kube.go:303] Getting endpoints for slice openshift-ingress/router-internal-default-vjpkx
I1115 05:39:27.120641 63045 kube.go:346] LB Endpoints for openshift-ingress/router-internal-default are: [] / [] on port: 0
I1115 05:39:27.120646 63045 kube.go:303] Getting endpoints for slice openshift-ingress/router-internal-default-vjpkx
I1115 05:39:27.120651 63045 kube.go:346] LB Endpoints for openshift-ingress/router-internal-default are: [] / [] on port: 0
I1115 05:39:27.120656 63045 kube.go:303] Getting endpoints for slice openshift-ingress/router-internal-default-vjpkx
I1115 05:39:27.120660 63045 kube.go:346] LB Endpoints for openshift-ingress/router-internal-default are: [] / [] on port: 0
I1115 05:39:27.120665 63045 services_controller.go:296] Built service openshift-ingress/router-internal-default LB cluster-wide configs []services.lbConfig{services.lbConfig{vips:[]string{"10.43.73.144"}, protocol:"TCP", inport:80, eps:util.LbEndpoints{V4IPs:[]string{}, V6IPs:[]string{}, Port:0}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{"10.43.73.144"}, protocol:"TCP", inport:443, eps:util.LbEndpoints{V4IPs:[]string{}, V6IPs:[]string{}, Port:0}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{"10.43.73.144"}, protocol:"TCP", inport:1936, eps:util.LbEndpoints{V4IPs:[]string{}, V6IPs:[]string{}, Port:0}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}
I1115 05:39:27.120681 63045 services_controller.go:297] Built service openshift-ingress/router-internal-default LB per-node configs []services.lbConfig(nil)
I1115 05:39:27.120695 63045 services_controller.go:303] Built service openshift-ingress/router-internal-default cluster-wide LB []loadbalancer.LB{loadbalancer.LB{Name:"Service_openshift-ingress/router-internal-default_TCP_cluster", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-ingress/router-internal-default"}, Opts:loadbalancer.LBOpts{Unidling:false, Affinity:false, SkipSNAT:false}, Rules:[]loadbalancer.LBRule{loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.73.144", Port:80}, Targets:[]loadbalancer.Addr{}}, loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.73.144", Port:443}, Targets:[]loadbalancer.Addr{}}, loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.73.144", Port:1936}, Targets:[]loadbalancer.Addr{}}}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{"clusterLBGroup"}}}
I1115 05:39:27.120716 63045 services_controller.go:304] Built service openshift-ingress/router-internal-default per-node LB []loadbalancer.LB{}
I1115 05:39:27.120723 63045 services_controller.go:305] Service openshift-ingress/router-internal-default has 3 cluster-wide and 0 per-node configs, making 1 and 0 load balancers
I1115 05:39:27.120743 63045 services_controller.go:314] Skipping no-op change for service openshift-ingress/router-internal-default
I1115 05:39:27.120748 63045 services_controller.go:245] Finished syncing service router-internal-default on namespace openshift-ingress : 181.298µs
I1115 05:39:27.120756 63045 services_controller.go:241] Processing sync for service openshift-dns/dns-default
I1115 05:39:27.120760 63045 services_controller.go:280] Service dns-default retrieved from lister: &Service{ObjectMeta:{dns-default openshift-dns b3f0582a-7984-43ab-a7fb-59fcf4f905df 327 0 2022-11-15 05:39:02 +0000 UTC map[] map[operator.openshift.io/spec-hash:c387daddabfc2dde4f8d3747fd4d4cc94e257885202c60f32bade610838704c3 service.beta.openshift.io/serving-cert-secret-name:dns-default-metrics-tls] [] [] [{microshift Update v1 2022-11-15 05:39:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:operator.openshift.io/spec-hash":{},"f:service.beta.openshift.io/serving-cert-secret-name":{}}},"f:spec":{"f:clusterIP":{},"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":53,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}},"k:{\"port\":53,\"protocol\":\"UDP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}},"k:{\"port\":9154,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:dns,Protocol:UDP,Port:53,TargetPort:{1 0 dns},NodePort:0,AppProtocol:nil,},ServicePort{Name:dns-tcp,Protocol:TCP,Port:53,TargetPort:{1 0 dns-tcp},NodePort:0,AppProtocol:nil,},ServicePort{Name:metrics,Protocol:TCP,Port:9154,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{dns.operator.openshift.io/daemonset-dns: default,},ClusterIP:10.43.0.10,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.43.0.10],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
I1115 05:39:27.120805 63045 kube.go:303] Getting endpoints for slice openshift-dns/dns-default-jxtwl
I1115 05:39:27.120809 63045 kube.go:346] LB Endpoints for openshift-dns/dns-default are: [] / [] on port: 0
I1115 05:39:27.120814 63045 kube.go:303] Getting endpoints for slice openshift-dns/dns-default-jxtwl
I1115 05:39:27.120818 63045 kube.go:346] LB Endpoints for openshift-dns/dns-default are: [] / [] on port: 0
I1115 05:39:27.120822 63045 kube.go:303] Getting endpoints for slice openshift-dns/dns-default-jxtwl
I1115 05:39:27.120826 63045 kube.go:346] LB Endpoints for openshift-dns/dns-default are: [] / [] on port: 0
I1115 05:39:27.120831 63045 services_controller.go:296] Built service openshift-dns/dns-default LB cluster-wide configs []services.lbConfig(nil)
I1115 05:39:27.120836 63045 services_controller.go:297] Built service openshift-dns/dns-default LB per-node configs []services.lbConfig{services.lbConfig{vips:[]string{"10.43.0.10"}, protocol:"UDP", inport:53, eps:util.LbEndpoints{V4IPs:[]string{}, V6IPs:[]string{}, Port:0}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{"10.43.0.10"}, protocol:"TCP", inport:53, eps:util.LbEndpoints{V4IPs:[]string{}, V6IPs:[]string{}, Port:0}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{"10.43.0.10"}, protocol:"TCP", inport:9154, eps:util.LbEndpoints{V4IPs:[]string{}, V6IPs:[]string{}, Port:0}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}
I1115 05:39:27.120864 63045 services_controller.go:303] Built service openshift-dns/dns-default cluster-wide LB []loadbalancer.LB{}
I1115 05:39:27.120875 63045 services_controller.go:304] Built service openshift-dns/dns-default per-node LB []loadbalancer.LB{loadbalancer.LB{Name:"Service_openshift-dns/dns-default_TCP_node_router+switch_release-ci-ci-op-k5cwk1pv-7cb14", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-dns/dns-default"}, Opts:loadbalancer.LBOpts{Unidling:false, Affinity:false, SkipSNAT:false}, Rules:[]loadbalancer.LBRule{loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.10", Port:53}, Targets:[]loadbalancer.Addr{}}, loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.10", Port:9154}, Targets:[]loadbalancer.Addr{}}}, Switches:[]string{"release-ci-ci-op-k5cwk1pv-7cb14"}, Routers:[]string{"GR_release-ci-ci-op-k5cwk1pv-7cb14"}, Groups:[]string(nil)}, loadbalancer.LB{Name:"Service_openshift-dns/dns-default_UDP_node_router+switch_release-ci-ci-op-k5cwk1pv-7cb14", UUID:"", Protocol:"UDP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-dns/dns-default"}, Opts:loadbalancer.LBOpts{Unidling:false, Affinity:false, SkipSNAT:false}, Rules:[]loadbalancer.LBRule{loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.10", Port:53}, Targets:[]loadbalancer.Addr{}}}, Switches:[]string{"release-ci-ci-op-k5cwk1pv-7cb14"}, Routers:[]string{"GR_release-ci-ci-op-k5cwk1pv-7cb14"},
Groups:[]string(nil)}} I1115 05:39:27.120904 63045 services_controller.go:305] Service openshift-dns/dns-default has 0 cluster-wide and 3 per-node configs, making 0 and 2 load balancers I1115 05:39:27.120916 63045 services_controller.go:316] Services do not match, existing lbs: []loadbalancer.LB{loadbalancer.LB{Name:"Service_openshift-dns/dns-default_TCP_node_switch_release-ci-ci-op-k5cwk1pv-7cb14", UUID:"2c790ce1-a33c-4f51-9824-b25b0b77e391", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-dns/dns-default"}, Opts:loadbalancer.LBOpts{Unidling:false, Affinity:false, SkipSNAT:false}, Rules:[]loadbalancer.LBRule{loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.10", Port:53}, Targets:[]loadbalancer.Addr{}}, loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.10", Port:9154}, Targets:[]loadbalancer.Addr{}}}, Switches:[]string{"release-ci-ci-op-k5cwk1pv-7cb14"}, Routers:[]string(nil), Groups:[]string(nil)}, loadbalancer.LB{Name:"Service_openshift-dns/dns-default_UDP_node_switch_release-ci-ci-op-k5cwk1pv-7cb14", UUID:"69bf140c-11c9-48c5-ba36-a8ff6921bf07", Protocol:"UDP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-dns/dns-default"}, Opts:loadbalancer.LBOpts{Unidling:false, Affinity:false, SkipSNAT:false}, Rules:[]loadbalancer.LBRule{loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.10", Port:53}, Targets:[]loadbalancer.Addr{}}}, Switches:[]string{"release-ci-ci-op-k5cwk1pv-7cb14"}, Routers:[]string(nil), Groups:[]string(nil)}}, built lbs: []loadbalancer.LB{loadbalancer.LB{Name:"Service_openshift-dns/dns-default_TCP_node_router+switch_release-ci-ci-op-k5cwk1pv-7cb14", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-dns/dns-default"}, Opts:loadbalancer.LBOpts{Unidling:false, Affinity:false, SkipSNAT:false}, 
Rules:[]loadbalancer.LBRule{loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.10", Port:53}, Targets:[]loadbalancer.Addr{}}, loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.10", Port:9154}, Targets:[]loadbalancer.Addr{}}}, Switches:[]string{"release-ci-ci-op-k5cwk1pv-7cb14"}, Routers:[]string{"GR_release-ci-ci-op-k5cwk1pv-7cb14"}, Groups:[]string(nil)}, loadbalancer.LB{Name:"Service_openshift-dns/dns-default_UDP_node_router+switch_release-ci-ci-op-k5cwk1pv-7cb14", UUID:"", Protocol:"UDP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-dns/dns-default"}, Opts:loadbalancer.LBOpts{Unidling:false, Affinity:false, SkipSNAT:false}, Rules:[]loadbalancer.LBRule{loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.10", Port:53}, Targets:[]loadbalancer.Addr{}}}, Switches:[]string{"release-ci-ci-op-k5cwk1pv-7cb14"}, Routers:[]string{"GR_release-ci-ci-op-k5cwk1pv-7cb14"}, Groups:[]string(nil)}} I1115 05:39:27.120999 63045 model_client.go:345] Create operations generated as: [{Op:insert Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_release-ci-ci-op-k5cwk1pv-7cb14 options:{GoMap:map[event:false reject:true skip_snat:false]} protocol:{GoSet:[tcp]} vips:{GoMap:map[10.43.0.10:53: 10.43.0.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996210}] I1115 05:39:27.121032 63045 model_client.go:345] Create operations generated as: [{Op:insert Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_UDP_node_router+switch_release-ci-ci-op-k5cwk1pv-7cb14 options:{GoMap:map[event:false reject:true skip_snat:false]} protocol:{GoSet:[udp]} vips:{GoMap:map[10.43.0.10:53:]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: 
Durable: Comment: Lock: UUIDName:u2596996211}] I1115 05:39:27.121080 63045 model_client.go:370] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:load_balancer Mutator:insert Value:{GoSet:[{GoUUID:u2596996210} {GoUUID:u2596996211}]}}] Timeout: Where:[where column _uuid == {55ba86b1-407f-4f90-86ba-a2378c8d6ccc}] Until: Durable: Comment: Lock: UUIDName:}] I1115 05:39:27.121105 63045 services_controller.go:245] Finished syncing service dns-default on namespace openshift-dns : 349.223µs I1115 05:39:27.121119 63045 services_controller.go:218] "Error syncing service, retrying" service="openshift-dns/dns-default" err="failed to ensure service openshift-dns/dns-default load balancers: object not found" I1115 05:39:27.121158 63045 ovs.go:203] Exec(21): stdout: "connected\n" I1115 05:39:27.121167 63045 ovs.go:204] Exec(21): stderr: "" I1115 05:39:27.121176 63045 node.go:269] Node connection status = connected I1115 05:39:27.121183 63045 ovs.go:200] Exec(23): /usr/bin/ovs-vsctl --timeout=15 -- br-exists br-int I1115 05:39:27.132207 63045 services_controller.go:241] Processing sync for service default/kubernetes I1115 05:39:27.132230 63045 services_controller.go:280] Service kubernetes retrieved from lister: &Service{ObjectMeta:{kubernetes default 75b8e8a4-42f7-4abc-b0ba-5afada275bf7 198 0 2022-11-15 05:38:37 +0000 UTC map[component:apiserver provider:kubernetes] map[] [] [] [{microshift Update v1 2022-11-15 05:38:37 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:component":{},"f:provider":{}}},"f:spec":{"f:clusterIP":{},"f:internalTrafficPolicy":{},"f:ipFamilyPolicy":{},"f:ports":{".":{},"k:{\"port\":443,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 6443 
},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.43.0.1,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.43.0.1],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} I1115 05:39:27.132345 63045 kube.go:303] Getting endpoints for slice default/kubernetes I1115 05:39:27.132353 63045 kube.go:330] Adding slice kubernetes endpoints: [10.0.0.2], port: 6443 I1115 05:39:27.132360 63045 kube.go:346] LB Endpoints for default/kubernetes are: [10.0.0.2] / [] on port: 6443 I1115 05:39:27.132368 63045 services_controller.go:296] Built service default/kubernetes LB cluster-wide configs []services.lbConfig(nil) I1115 05:39:27.132375 63045 services_controller.go:297] Built service default/kubernetes LB per-node configs []services.lbConfig{services.lbConfig{vips:[]string{"10.43.0.1"}, protocol:"TCP", inport:443, eps:util.LbEndpoints{V4IPs:[]string{"10.0.0.2"}, V6IPs:[]string{}, Port:6443}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} I1115 05:39:27.132411 63045 services_controller.go:303] Built service default/kubernetes cluster-wide LB []loadbalancer.LB{} I1115 05:39:27.132421 63045 services_controller.go:304] Built service default/kubernetes per-node LB []loadbalancer.LB{loadbalancer.LB{Name:"Service_default/kubernetes_TCP_node_router_release-ci-ci-op-k5cwk1pv-7cb14", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"default/kubernetes"}, Opts:loadbalancer.LBOpts{Unidling:false, Affinity:false, SkipSNAT:false}, Rules:[]loadbalancer.LBRule{loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.1", 
Port:443}, Targets:[]loadbalancer.Addr{loadbalancer.Addr{IP:"169.254.169.2", Port:6443}}}}, Switches:[]string(nil), Routers:[]string{"GR_release-ci-ci-op-k5cwk1pv-7cb14"}, Groups:[]string(nil)}, loadbalancer.LB{Name:"Service_default/kubernetes_TCP_node_switch_release-ci-ci-op-k5cwk1pv-7cb14", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"default/kubernetes"}, Opts:loadbalancer.LBOpts{Unidling:false, Affinity:false, SkipSNAT:false}, Rules:[]loadbalancer.LBRule{loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.1", Port:443}, Targets:[]loadbalancer.Addr{loadbalancer.Addr{IP:"10.0.0.2", Port:6443}}}}, Switches:[]string{"release-ci-ci-op-k5cwk1pv-7cb14"}, Routers:[]string(nil), Groups:[]string(nil)}} I1115 05:39:27.132456 63045 services_controller.go:305] Service default/kubernetes has 0 cluster-wide and 1 per-node configs, making 0 and 2 load balancers I1115 05:39:27.132465 63045 services_controller.go:316] Services do not match, existing lbs: []loadbalancer.LB{loadbalancer.LB{Name:"Service_default/kubernetes_TCP_node_switch_release-ci-ci-op-k5cwk1pv-7cb14", UUID:"c4d96c7e-1529-457a-9c00-dbe41d077136", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"default/kubernetes"}, Opts:loadbalancer.LBOpts{Unidling:false, Affinity:false, SkipSNAT:false}, Rules:[]loadbalancer.LBRule{loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.1", Port:443}, Targets:[]loadbalancer.Addr{loadbalancer.Addr{IP:"10.0.0.2", Port:6443}}}}, Switches:[]string{"release-ci-ci-op-k5cwk1pv-7cb14"}, Routers:[]string(nil), Groups:[]string(nil)}}, built lbs: []loadbalancer.LB{loadbalancer.LB{Name:"Service_default/kubernetes_TCP_node_router_release-ci-ci-op-k5cwk1pv-7cb14", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"default/kubernetes"}, Opts:loadbalancer.LBOpts{Unidling:false, Affinity:false, SkipSNAT:false}, 
Rules:[]loadbalancer.LBRule{loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.1", Port:443}, Targets:[]loadbalancer.Addr{loadbalancer.Addr{IP:"169.254.169.2", Port:6443}}}}, Switches:[]string(nil), Routers:[]string{"GR_release-ci-ci-op-k5cwk1pv-7cb14"}, Groups:[]string(nil)}, loadbalancer.LB{Name:"Service_default/kubernetes_TCP_node_switch_release-ci-ci-op-k5cwk1pv-7cb14", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"default/kubernetes"}, Opts:loadbalancer.LBOpts{Unidling:false, Affinity:false, SkipSNAT:false}, Rules:[]loadbalancer.LBRule{loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.1", Port:443}, Targets:[]loadbalancer.Addr{loadbalancer.Addr{IP:"10.0.0.2", Port:6443}}}}, Switches:[]string{"release-ci-ci-op-k5cwk1pv-7cb14"}, Routers:[]string(nil), Groups:[]string(nil)}} I1115 05:39:27.132628 63045 model_client.go:354] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:default/kubernetes]} name:Service_default/kubernetes_TCP_node_switch_release-ci-ci-op-k5cwk1pv-7cb14 options:{GoMap:map[event:false reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.43.0.1:443:10.0.0.2:6443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c4d96c7e-1529-457a-9c00-dbe41d077136}] Until: Durable: Comment: Lock: UUIDName:}] I1115 05:39:27.132698 63045 model_client.go:345] Create operations generated as: [{Op:insert Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:default/kubernetes]} name:Service_default/kubernetes_TCP_node_router_release-ci-ci-op-k5cwk1pv-7cb14 options:{GoMap:map[event:false reject:true skip_snat:false]} protocol:{GoSet:[tcp]} vips:{GoMap:map[10.43.0.1:443:169.254.169.2:6443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996212}] I1115 
05:39:27.132737 63045 services_controller.go:245] Finished syncing service kubernetes on namespace default : 536.39µs I1115 05:39:27.132756 63045 services_controller.go:218] "Error syncing service, retrying" service="default/kubernetes" err="failed to ensure service default/kubernetes load balancers: object not found" I1115 05:39:27.132772 63045 services_controller.go:241] Processing sync for service openshift-dns/dns-default I1115 05:39:27.132778 63045 services_controller.go:280] Service dns-default retrieved from lister: &Service{ObjectMeta:{dns-default openshift-dns b3f0582a-7984-43ab-a7fb-59fcf4f905df 327 0 2022-11-15 05:39:02 +0000 UTC map[] map[operator.openshift.io/spec-hash:c387daddabfc2dde4f8d3747fd4d4cc94e257885202c60f32bade610838704c3 service.beta.openshift.io/serving-cert-secret-name:dns-default-metrics-tls] [] [] [{microshift Update v1 2022-11-15 05:39:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:operator.openshift.io/spec-hash":{},"f:service.beta.openshift.io/serving-cert-secret-name":{}}},"f:spec":{"f:clusterIP":{},"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":53,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}},"k:{\"port\":53,\"protocol\":\"UDP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}},"k:{\"port\":9154,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:dns,Protocol:UDP,Port:53,TargetPort:{1 0 dns},NodePort:0,AppProtocol:nil,},ServicePort{Name:dns-tcp,Protocol:TCP,Port:53,TargetPort:{1 0 dns-tcp},NodePort:0,AppProtocol:nil,},ServicePort{Name:metrics,Protocol:TCP,Port:9154,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{dns.operator.openshift.io/daemonset-dns: 
default,},ClusterIP:10.43.0.10,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.43.0.10],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} I1115 05:39:27.132849 63045 kube.go:303] Getting endpoints for slice openshift-dns/dns-default-jxtwl I1115 05:39:27.132856 63045 kube.go:346] LB Endpoints for openshift-dns/dns-default are: [] / [] on port: 0 I1115 05:39:27.132863 63045 kube.go:303] Getting endpoints for slice openshift-dns/dns-default-jxtwl I1115 05:39:27.132869 63045 kube.go:346] LB Endpoints for openshift-dns/dns-default are: [] / [] on port: 0 I1115 05:39:27.132875 63045 kube.go:303] Getting endpoints for slice openshift-dns/dns-default-jxtwl I1115 05:39:27.132881 63045 kube.go:346] LB Endpoints for openshift-dns/dns-default are: [] / [] on port: 0 I1115 05:39:27.132889 63045 services_controller.go:296] Built service openshift-dns/dns-default LB cluster-wide configs []services.lbConfig(nil) I1115 05:39:27.132895 63045 services_controller.go:297] Built service openshift-dns/dns-default LB per-node configs []services.lbConfig{services.lbConfig{vips:[]string{"10.43.0.10"}, protocol:"UDP", inport:53, eps:util.LbEndpoints{V4IPs:[]string{}, V6IPs:[]string{}, Port:0}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{"10.43.0.10"}, protocol:"TCP", inport:53, eps:util.LbEndpoints{V4IPs:[]string{}, V6IPs:[]string{}, Port:0}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{"10.43.0.10"}, protocol:"TCP", inport:9154, eps:util.LbEndpoints{V4IPs:[]string{}, V6IPs:[]string{}, Port:0}, 
externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} I1115 05:39:27.132922 63045 services_controller.go:303] Built service openshift-dns/dns-default cluster-wide LB []loadbalancer.LB{} I1115 05:39:27.132933 63045 services_controller.go:304] Built service openshift-dns/dns-default per-node LB []loadbalancer.LB{loadbalancer.LB{Name:"Service_openshift-dns/dns-default_TCP_node_router+switch_release-ci-ci-op-k5cwk1pv-7cb14", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-dns/dns-default"}, Opts:loadbalancer.LBOpts{Unidling:false, Affinity:false, SkipSNAT:false}, Rules:[]loadbalancer.LBRule{loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.10", Port:53}, Targets:[]loadbalancer.Addr{}}, loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.10", Port:9154}, Targets:[]loadbalancer.Addr{}}}, Switches:[]string{"release-ci-ci-op-k5cwk1pv-7cb14"}, Routers:[]string{"GR_release-ci-ci-op-k5cwk1pv-7cb14"}, Groups:[]string(nil)}, loadbalancer.LB{Name:"Service_openshift-dns/dns-default_UDP_node_router+switch_release-ci-ci-op-k5cwk1pv-7cb14", UUID:"", Protocol:"UDP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-dns/dns-default"}, Opts:loadbalancer.LBOpts{Unidling:false, Affinity:false, SkipSNAT:false}, Rules:[]loadbalancer.LBRule{loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.10", Port:53}, Targets:[]loadbalancer.Addr{}}}, Switches:[]string{"release-ci-ci-op-k5cwk1pv-7cb14"}, Routers:[]string{"GR_release-ci-ci-op-k5cwk1pv-7cb14"}, Groups:[]string(nil)}} I1115 05:39:27.132964 63045 services_controller.go:305] Service openshift-dns/dns-default has 0 cluster-wide and 3 per-node configs, making 0 and 2 load balancers I1115 05:39:27.132977 63045 services_controller.go:316] Services do not match, existing lbs: []loadbalancer.LB{loadbalancer.LB{Name:"Service_openshift-dns/dns-default_TCP_node_switch_release-ci-ci-op-k5cwk1pv-7cb14", 
UUID:"2c790ce1-a33c-4f51-9824-b25b0b77e391", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-dns/dns-default"}, Opts:loadbalancer.LBOpts{Unidling:false, Affinity:false, SkipSNAT:false}, Rules:[]loadbalancer.LBRule{loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.10", Port:53}, Targets:[]loadbalancer.Addr{}}, loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.10", Port:9154}, Targets:[]loadbalancer.Addr{}}}, Switches:[]string{"release-ci-ci-op-k5cwk1pv-7cb14"}, Routers:[]string(nil), Groups:[]string(nil)}, loadbalancer.LB{Name:"Service_openshift-dns/dns-default_UDP_node_switch_release-ci-ci-op-k5cwk1pv-7cb14", UUID:"69bf140c-11c9-48c5-ba36-a8ff6921bf07", Protocol:"UDP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-dns/dns-default"}, Opts:loadbalancer.LBOpts{Unidling:false, Affinity:false, SkipSNAT:false}, Rules:[]loadbalancer.LBRule{loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.10", Port:53}, Targets:[]loadbalancer.Addr{}}}, Switches:[]string{"release-ci-ci-op-k5cwk1pv-7cb14"}, Routers:[]string(nil), Groups:[]string(nil)}}, built lbs: []loadbalancer.LB{loadbalancer.LB{Name:"Service_openshift-dns/dns-default_TCP_node_router+switch_release-ci-ci-op-k5cwk1pv-7cb14", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-dns/dns-default"}, Opts:loadbalancer.LBOpts{Unidling:false, Affinity:false, SkipSNAT:false}, Rules:[]loadbalancer.LBRule{loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.10", Port:53}, Targets:[]loadbalancer.Addr{}}, loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.10", Port:9154}, Targets:[]loadbalancer.Addr{}}}, Switches:[]string{"release-ci-ci-op-k5cwk1pv-7cb14"}, Routers:[]string{"GR_release-ci-ci-op-k5cwk1pv-7cb14"}, Groups:[]string(nil)}, 
loadbalancer.LB{Name:"Service_openshift-dns/dns-default_UDP_node_router+switch_release-ci-ci-op-k5cwk1pv-7cb14", UUID:"", Protocol:"UDP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-dns/dns-default"}, Opts:loadbalancer.LBOpts{Unidling:false, Affinity:false, SkipSNAT:false}, Rules:[]loadbalancer.LBRule{loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.10", Port:53}, Targets:[]loadbalancer.Addr{}}}, Switches:[]string{"release-ci-ci-op-k5cwk1pv-7cb14"}, Routers:[]string{"GR_release-ci-ci-op-k5cwk1pv-7cb14"}, Groups:[]string(nil)}} I1115 05:39:27.133082 63045 model_client.go:345] Create operations generated as: [{Op:insert Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_release-ci-ci-op-k5cwk1pv-7cb14 options:{GoMap:map[event:false reject:true skip_snat:false]} protocol:{GoSet:[tcp]} vips:{GoMap:map[10.43.0.10:53: 10.43.0.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996213}] I1115 05:39:27.133115 63045 model_client.go:345] Create operations generated as: [{Op:insert Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_UDP_node_router+switch_release-ci-ci-op-k5cwk1pv-7cb14 options:{GoMap:map[event:false reject:true skip_snat:false]} protocol:{GoSet:[udp]} vips:{GoMap:map[10.43.0.10:53:]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996214}] I1115 05:39:27.133163 63045 model_client.go:370] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:load_balancer Mutator:insert Value:{GoSet:[{GoUUID:u2596996213} {GoUUID:u2596996214}]}}] Timeout: Where:[where column _uuid == {55ba86b1-407f-4f90-86ba-a2378c8d6ccc}] 
Until: Durable: Comment: Lock: UUIDName:}] I1115 05:39:27.133189 63045 services_controller.go:245] Finished syncing service dns-default on namespace openshift-dns : 416.124µs I1115 05:39:27.133201 63045 services_controller.go:218] "Error syncing service, retrying" service="openshift-dns/dns-default" err="failed to ensure service openshift-dns/dns-default load balancers: object not found" I1115 05:39:27.133264 63045 model_client.go:345] Create operations generated as: [{Op:insert Table:Gateway_Chassis Row:map[chassis_name:77436c83-1258-484f-b8d8-ec91acb3c8f3 name:rtos-release-ci-ci-op-k5cwk1pv-7cb14-77436c83-1258-484f-b8d8-ec91acb3c8f3 priority:1] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996215}] I1115 05:39:27.133310 63045 model_client.go:354] Update operations generated as: [{Op:update Table:Logical_Router_Port Row:map[gateway_chassis:{GoSet:[{GoUUID:u2596996215}]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {5e6913a8-e256-48aa-8cb4-bd6613d3ba1f}] Until: Durable: Comment: Lock: UUIDName:}] I1115 05:39:27.133325 63045 transact.go:41] Configuring OVN: [{Op:insert Table:Gateway_Chassis Row:map[chassis_name:77436c83-1258-484f-b8d8-ec91acb3c8f3 name:rtos-release-ci-ci-op-k5cwk1pv-7cb14-77436c83-1258-484f-b8d8-ec91acb3c8f3 priority:1] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996215} {Op:update Table:Logical_Router_Port Row:map[gateway_chassis:{GoSet:[{GoUUID:u2596996215}]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {5e6913a8-e256-48aa-8cb4-bd6613d3ba1f}] Until: Durable: Comment: Lock: UUIDName:}] I1115 05:39:27.133374 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:insert Table:Gateway_Chassis Row:map[chassis_name:77436c83-1258-484f-b8d8-ec91acb3c8f3 name:rtos-release-ci-ci-op-k5cwk1pv-7cb14-77436c83-1258-484f-b8d8-ec91acb3c8f3 priority:1] Rows:[] Columns:[] 
Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996215} {Op:update Table:Logical_Router_Port Row:map[gateway_chassis:{GoSet:[{GoUUID:u2596996215}]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {5e6913a8-e256-48aa-8cb4-bd6613d3ba1f}] Until: Durable: Comment: Lock: UUIDName:}]" I1115 05:39:27.136308 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Southbound" "table"="Port_Binding" "uuid"="0db1f757-3d50-4b20-a8e6-1b9eefa7986c" I1115 05:39:27.136405 63045 cache.go:1040] cache "msg"="updated row" "database"="OVN_Southbound" "table"="Port_Binding" "uuid"="0db1f757-3d50-4b20-a8e6-1b9eefa7986c" "new"="&{UUID:0db1f757-3d50-4b20-a8e6-1b9eefa7986c Chassis: Datapath:03caccf0-c9e5-40b2-b775-78ddd6e1ae32 Encap: ExternalIDs:map[] GatewayChassis:[] HaChassisGroup: LogicalPort:stor-release-ci-ci-op-k5cwk1pv-7cb14 MAC:[router] NatAddresses:[0a:58:0a:2a:00:01 10.42.0.1 is_chassis_resident(\"cr-rtos-release-ci-ci-op-k5cwk1pv-7cb14\")] Options:map[peer:rtos-release-ci-ci-op-k5cwk1pv-7cb14] ParentPort: RequestedChassis: Tag: TunnelKey:1 Type:patch Up:0xc000964d20 VirtualParent:}" "old"="&{UUID:0db1f757-3d50-4b20-a8e6-1b9eefa7986c Chassis: Datapath:03caccf0-c9e5-40b2-b775-78ddd6e1ae32 Encap: ExternalIDs:map[] GatewayChassis:[] HaChassisGroup: LogicalPort:stor-release-ci-ci-op-k5cwk1pv-7cb14 MAC:[router] NatAddresses:[] Options:map[peer:rtos-release-ci-ci-op-k5cwk1pv-7cb14] ParentPort: RequestedChassis: Tag: TunnelKey:1 Type:patch Up:0xc000964d40 VirtualParent:}" I1115 05:39:27.136425 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Southbound" "table"="Port_Binding" "uuid"="6c7e5a94-dfdd-4ffe-971f-d215f164aac7" I1115 05:39:27.136471 63045 cache.go:1040] cache "msg"="updated row" "database"="OVN_Southbound" "table"="Port_Binding" "uuid"="6c7e5a94-dfdd-4ffe-971f-d215f164aac7" "new"="&{UUID:6c7e5a94-dfdd-4ffe-971f-d215f164aac7 Chassis: Datapath:831330e5-54ca-447f-b92c-4caa0cfec9eb Encap: 
ExternalIDs:map[] GatewayChassis:[] HaChassisGroup: LogicalPort:rtos-release-ci-ci-op-k5cwk1pv-7cb14 MAC:[0a:58:0a:2a:00:01 10.42.0.1/24] NatAddresses:[] Options:map[chassis-redirect-port:cr-rtos-release-ci-ci-op-k5cwk1pv-7cb14 peer:stor-release-ci-ci-op-k5cwk1pv-7cb14] ParentPort: RequestedChassis: Tag: TunnelKey:1 Type:patch Up:0xc000964fa0 VirtualParent:}" "old"="&{UUID:6c7e5a94-dfdd-4ffe-971f-d215f164aac7 Chassis: Datapath:831330e5-54ca-447f-b92c-4caa0cfec9eb Encap: ExternalIDs:map[] GatewayChassis:[] HaChassisGroup: LogicalPort:rtos-release-ci-ci-op-k5cwk1pv-7cb14 MAC:[0a:58:0a:2a:00:01 10.42.0.1/24] NatAddresses:[] Options:map[peer:stor-release-ci-ci-op-k5cwk1pv-7cb14] ParentPort: RequestedChassis: Tag: TunnelKey:1 Type:patch Up:0xc000964fc0 VirtualParent:}" I1115 05:39:27.136486 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Southbound" "table"="Port_Binding" "uuid"="f058c4f7-e765-45d1-9a03-85d3609cf488" I1115 05:39:27.136538 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Southbound" "table"="Port_Binding" "uuid"="f058c4f7-e765-45d1-9a03-85d3609cf488" "model"="&{UUID:f058c4f7-e765-45d1-9a03-85d3609cf488 Chassis: Datapath:831330e5-54ca-447f-b92c-4caa0cfec9eb Encap: ExternalIDs:map[] GatewayChassis:[] HaChassisGroup:0xc00095abe0 LogicalPort:cr-rtos-release-ci-ci-op-k5cwk1pv-7cb14 MAC:[0a:58:0a:2a:00:01 10.42.0.1/24] NatAddresses:[] Options:map[always-redirect:true distributed-port:rtos-release-ci-ci-op-k5cwk1pv-7cb14] ParentPort: RequestedChassis: Tag: TunnelKey:3 Type:chassisredirect Up:0xc000965228 VirtualParent:}" I1115 05:39:27.136642 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Gateway_Chassis" "uuid"="55405ea0-c702-4d57-aa9f-32b1e4594f94" I1115 05:39:27.136681 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Northbound" "table"="Gateway_Chassis" "uuid"="55405ea0-c702-4d57-aa9f-32b1e4594f94" "model"="&{UUID:55405ea0-c702-4d57-aa9f-32b1e4594f94 
ChassisName:77436c83-1258-484f-b8d8-ec91acb3c8f3 ExternalIDs:map[] Name:rtos-release-ci-ci-op-k5cwk1pv-7cb14-77436c83-1258-484f-b8d8-ec91acb3c8f3 Options:map[] Priority:1}" I1115 05:39:27.136703 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Logical_Router_Port" "uuid"="5e6913a8-e256-48aa-8cb4-bd6613d3ba1f" I1115 05:39:27.136737 63045 cache.go:1040] cache "msg"="updated row" "database"="OVN_Northbound" "table"="Logical_Router_Port" "uuid"="5e6913a8-e256-48aa-8cb4-bd6613d3ba1f" "new"="&{UUID:5e6913a8-e256-48aa-8cb4-bd6613d3ba1f Enabled: ExternalIDs:map[] GatewayChassis:[55405ea0-c702-4d57-aa9f-32b1e4594f94] HaChassisGroup: Ipv6Prefix:[] Ipv6RaConfigs:map[] MAC:0a:58:0a:2a:00:01 Name:rtos-release-ci-ci-op-k5cwk1pv-7cb14 Networks:[10.42.0.1/24] Options:map[] Peer:}" "old"="&{UUID:5e6913a8-e256-48aa-8cb4-bd6613d3ba1f Enabled: ExternalIDs:map[] GatewayChassis:[] HaChassisGroup: Ipv6Prefix:[] Ipv6RaConfigs:map[] MAC:0a:58:0a:2a:00:01 Name:rtos-release-ci-ci-op-k5cwk1pv-7cb14 Networks:[10.42.0.1/24] Options:map[] Peer:}" I1115 05:39:27.136820 63045 model_client.go:345] Create operations generated as: [{Op:insert Table:ACL Row:map[action:allow-related direction:to-lport log:false match:ip4.src==10.42.0.2 meter:{GoSet:[acl-logging]} priority:1001] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996216}] I1115 05:39:27.136872 63045 model_client.go:370] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:acls Mutator:insert Value:{GoSet:[{GoUUID:u2596996216}]}}] Timeout: Where:[where column _uuid == {55ba86b1-407f-4f90-86ba-a2378c8d6ccc}] Until: Durable: Comment: Lock: UUIDName:}] I1115 05:39:27.136888 63045 transact.go:41] Configuring OVN: [{Op:insert Table:ACL Row:map[action:allow-related direction:to-lport log:false match:ip4.src==10.42.0.2 meter:{GoSet:[acl-logging]} priority:1001] Rows:[] Columns:[] Mutations:[] Timeout: 
Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996216} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:acls Mutator:insert Value:{GoSet:[{GoUUID:u2596996216}]}}] Timeout: Where:[where column _uuid == {55ba86b1-407f-4f90-86ba-a2378c8d6ccc}] Until: Durable: Comment: Lock: UUIDName:}] I1115 05:39:27.136935 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:insert Table:ACL Row:map[action:allow-related direction:to-lport log:false match:ip4.src==10.42.0.2 meter:{GoSet:[acl-logging]} priority:1001] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996216} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:acls Mutator:insert Value:{GoSet:[{GoUUID:u2596996216}]}}] Timeout: Where:[where column _uuid == {55ba86b1-407f-4f90-86ba-a2378c8d6ccc}] Until: Durable: Comment: Lock: UUIDName:}]" I1115 05:39:27.137354 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Logical_Switch" "uuid"="55ba86b1-407f-4f90-86ba-a2378c8d6ccc" I1115 05:39:27.137418 63045 cache.go:1040] cache "msg"="updated row" "database"="OVN_Northbound" "table"="Logical_Switch" "uuid"="55ba86b1-407f-4f90-86ba-a2378c8d6ccc" "new"="&{UUID:55ba86b1-407f-4f90-86ba-a2378c8d6ccc ACLs:[4d8ad76c-5917-4379-aaa9-fc4d3fbb25cd] Copp: DNSRecords:[] ExternalIDs:map[] ForwardingGroups:[] LoadBalancer:[c4d96c7e-1529-457a-9c00-dbe41d077136 2c790ce1-a33c-4f51-9824-b25b0b77e391 69bf140c-11c9-48c5-ba36-a8ff6921bf07] LoadBalancerGroup:[7b81a844-05a7-4d75-90db-fc377eeda1a5] Name:release-ci-ci-op-k5cwk1pv-7cb14 OtherConfig:map[exclude_ips:10.42.0.2 mcast_eth_src:0a:58:0a:2a:00:01 mcast_ip4_src:10.42.0.1 mcast_querier:true mcast_snoop:true subnet:10.42.0.0/24] Ports:[16e74789-d810-4e2a-86f4-f17eb9166ace] QOSRules:[]}" "old"="&{UUID:55ba86b1-407f-4f90-86ba-a2378c8d6ccc ACLs:[] Copp: DNSRecords:[] ExternalIDs:map[] ForwardingGroups:[] 
LoadBalancer:[c4d96c7e-1529-457a-9c00-dbe41d077136 2c790ce1-a33c-4f51-9824-b25b0b77e391 69bf140c-11c9-48c5-ba36-a8ff6921bf07] LoadBalancerGroup:[7b81a844-05a7-4d75-90db-fc377eeda1a5] Name:release-ci-ci-op-k5cwk1pv-7cb14 OtherConfig:map[exclude_ips:10.42.0.2 mcast_eth_src:0a:58:0a:2a:00:01 mcast_ip4_src:10.42.0.1 mcast_querier:true mcast_snoop:true subnet:10.42.0.0/24] Ports:[16e74789-d810-4e2a-86f4-f17eb9166ace] QOSRules:[]}" I1115 05:39:27.137461 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="ACL" "uuid"="4d8ad76c-5917-4379-aaa9-fc4d3fbb25cd"
+ kubectl logs --previous=true -n openshift-ovn-kubernetes pod/ovnkube-master-kdsb7 ovnkube-master
I1115 05:39:27.137486 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Northbound" "table"="ACL" "uuid"="4d8ad76c-5917-4379-aaa9-fc4d3fbb25cd" "model"="&{UUID:4d8ad76c-5917-4379-aaa9-fc4d3fbb25cd Action:allow-related Direction:to-lport ExternalIDs:map[] Label:0 Log:false Match:ip4.src==10.42.0.2 Meter:0xc00097c340 Name: Options:map[] Priority:1001 Severity:}" I1115 05:39:27.137559 63045 model_client.go:345] Create operations generated as: [{Op:insert Table:Logical_Router_Static_Route Row:map[ip_prefix:10.42.0.0/24 nexthop:10.42.0.2 policy:{GoSet:[src-ip]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996217}] I1115 05:39:27.137601 63045 model_client.go:370] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:u2596996217}]}}] Timeout: Where:[where column _uuid == {0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5}] Until: Durable: Comment: Lock: UUIDName:}] I1115 05:39:27.137620 63045 transact.go:41] Configuring OVN: [{Op:insert Table:Logical_Router_Static_Route Row:map[ip_prefix:10.42.0.0/24 nexthop:10.42.0.2 policy:{GoSet:[src-ip]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock:
UUIDName:u2596996217} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:u2596996217}]}}] Timeout: Where:[where column _uuid == {0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5}] Until: Durable: Comment: Lock: UUIDName:}] I1115 05:39:27.137661 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:insert Table:Logical_Router_Static_Route Row:map[ip_prefix:10.42.0.0/24 nexthop:10.42.0.2 policy:{GoSet:[src-ip]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996217} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:u2596996217}]}}] Timeout: Where:[where column _uuid == {0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5}] Until: Durable: Comment: Lock: UUIDName:}]" I1115 05:39:27.140064 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Logical_Router" "uuid"="0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5" I1115 05:39:27.140139 63045 cache.go:1040] cache "msg"="updated row" "database"="OVN_Northbound" "table"="Logical_Router" "uuid"="0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5" "new"="&{UUID:0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5 Copp: Enabled: ExternalIDs:map[k8s-cluster-router:yes k8s-ovn-topo-version:5] LoadBalancer:[] LoadBalancerGroup:[] Name:ovn_cluster_router Nat:[] Options:map[mcast_relay:true] Policies:[7969e67a-bd78-4152-9fcb-9a1dda1b1bf8 a3f0805d-e3db-4a3a-86f8-49cb514ad8f9 49b06b1c-7370-4cfc-8440-1876c45a4898] Ports:[d132d1a8-41f8-430a-a203-0d05cafce999 5e6913a8-e256-48aa-8cb4-bd6613d3ba1f] StaticRoutes:[cf65849c-8e5f-491e-bdfa-533d51a6a614]}" "old"="&{UUID:0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5 Copp: Enabled: ExternalIDs:map[k8s-cluster-router:yes k8s-ovn-topo-version:5] LoadBalancer:[] LoadBalancerGroup:[] Name:ovn_cluster_router Nat:[] Options:map[mcast_relay:true] Policies:[7969e67a-bd78-4152-9fcb-9a1dda1b1bf8 
a3f0805d-e3db-4a3a-86f8-49cb514ad8f9 49b06b1c-7370-4cfc-8440-1876c45a4898] Ports:[d132d1a8-41f8-430a-a203-0d05cafce999 5e6913a8-e256-48aa-8cb4-bd6613d3ba1f] StaticRoutes:[]}" I1115 05:39:27.140155 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Logical_Router_Static_Route" "uuid"="cf65849c-8e5f-491e-bdfa-533d51a6a614" I1115 05:39:27.140184 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Northbound" "table"="Logical_Router_Static_Route" "uuid"="cf65849c-8e5f-491e-bdfa-533d51a6a614" "model"="&{UUID:cf65849c-8e5f-491e-bdfa-533d51a6a614 BFD: ExternalIDs:map[] IPPrefix:10.42.0.0/24 Nexthop:10.42.0.2 Options:map[] OutputPort: Policy:0xc00097d0a0 RouteTable:}" I1115 05:39:27.140250 63045 model_client.go:345] Create operations generated as: [{Op:insert Table:Logical_Switch_Port Row:map[addresses:{GoSet:[52:17:ad:e6:11:7d 10.42.0.2]} name:k8s-release-ci-ci-op-k5cwk1pv-7cb14] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996218}] I1115 05:39:27.140294 63045 model_client.go:370] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:u2596996218}]}}] Timeout: Where:[where column _uuid == {55ba86b1-407f-4f90-86ba-a2378c8d6ccc}] Until: Durable: Comment: Lock: UUIDName:}] I1115 05:39:27.140312 63045 transact.go:41] Configuring OVN: [{Op:insert Table:Logical_Switch_Port Row:map[addresses:{GoSet:[52:17:ad:e6:11:7d 10.42.0.2]} name:k8s-release-ci-ci-op-k5cwk1pv-7cb14] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996218} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:u2596996218}]}}] Timeout: Where:[where column _uuid == {55ba86b1-407f-4f90-86ba-a2378c8d6ccc}] Until: Durable: Comment: Lock: UUIDName:}] I1115 05:39:27.140355 63045 client.go:783] 
"msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:insert Table:Logical_Switch_Port Row:map[addresses:{GoSet:[52:17:ad:e6:11:7d 10.42.0.2]} name:k8s-release-ci-ci-op-k5cwk1pv-7cb14] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996218} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:u2596996218}]}}] Timeout: Where:[where column _uuid == {55ba86b1-407f-4f90-86ba-a2378c8d6ccc}] Until: Durable: Comment: Lock: UUIDName:}]" I1115 05:39:27.140701 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Logical_Switch_Port" "uuid"="c690cf3a-b770-41dc-97e7-f0d8a5d9708e" I1115 05:39:27.140746 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Northbound" "table"="Logical_Switch_Port" "uuid"="c690cf3a-b770-41dc-97e7-f0d8a5d9708e" "model"="&{UUID:c690cf3a-b770-41dc-97e7-f0d8a5d9708e Addresses:[52:17:ad:e6:11:7d 10.42.0.2] Dhcpv4Options: Dhcpv6Options: DynamicAddresses: Enabled: ExternalIDs:map[] HaChassisGroup: Name:k8s-release-ci-ci-op-k5cwk1pv-7cb14 Options:map[] ParentName: PortSecurity:[] Tag: TagRequest: Type: Up:}" I1115 05:39:27.140766 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Logical_Switch" "uuid"="55ba86b1-407f-4f90-86ba-a2378c8d6ccc" I1115 05:39:27.140809 63045 cache.go:1040] cache "msg"="updated row" "database"="OVN_Northbound" "table"="Logical_Switch" "uuid"="55ba86b1-407f-4f90-86ba-a2378c8d6ccc" "new"="&{UUID:55ba86b1-407f-4f90-86ba-a2378c8d6ccc ACLs:[4d8ad76c-5917-4379-aaa9-fc4d3fbb25cd] Copp: DNSRecords:[] ExternalIDs:map[] ForwardingGroups:[] LoadBalancer:[c4d96c7e-1529-457a-9c00-dbe41d077136 2c790ce1-a33c-4f51-9824-b25b0b77e391 69bf140c-11c9-48c5-ba36-a8ff6921bf07] LoadBalancerGroup:[7b81a844-05a7-4d75-90db-fc377eeda1a5] Name:release-ci-ci-op-k5cwk1pv-7cb14 OtherConfig:map[exclude_ips:10.42.0.2 
mcast_eth_src:0a:58:0a:2a:00:01 mcast_ip4_src:10.42.0.1 mcast_querier:true mcast_snoop:true subnet:10.42.0.0/24] Ports:[16e74789-d810-4e2a-86f4-f17eb9166ace c690cf3a-b770-41dc-97e7-f0d8a5d9708e] QOSRules:[]}" "old"="&{UUID:55ba86b1-407f-4f90-86ba-a2378c8d6ccc ACLs:[4d8ad76c-5917-4379-aaa9-fc4d3fbb25cd] Copp: DNSRecords:[] ExternalIDs:map[] ForwardingGroups:[] LoadBalancer:[c4d96c7e-1529-457a-9c00-dbe41d077136 2c790ce1-a33c-4f51-9824-b25b0b77e391 69bf140c-11c9-48c5-ba36-a8ff6921bf07] LoadBalancerGroup:[7b81a844-05a7-4d75-90db-fc377eeda1a5] Name:release-ci-ci-op-k5cwk1pv-7cb14 OtherConfig:map[exclude_ips:10.42.0.2 mcast_eth_src:0a:58:0a:2a:00:01 mcast_ip4_src:10.42.0.1 mcast_querier:true mcast_snoop:true subnet:10.42.0.0/24] Ports:[16e74789-d810-4e2a-86f4-f17eb9166ace] QOSRules:[]}" I1115 05:39:27.140890 63045 model_client.go:370] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c690cf3a-b770-41dc-97e7-f0d8a5d9708e}]}}] Timeout: Where:[where column _uuid == {69f41db0-3b67-40ce-a811-31a29b2cc642}] Until: Durable: Comment: Lock: UUIDName:}] I1115 05:39:27.140928 63045 transact.go:41] Configuring OVN: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c690cf3a-b770-41dc-97e7-f0d8a5d9708e}]}}] Timeout: Where:[where column _uuid == {69f41db0-3b67-40ce-a811-31a29b2cc642}] Until: Durable: Comment: Lock: UUIDName:}] I1115 05:39:27.140955 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c690cf3a-b770-41dc-97e7-f0d8a5d9708e}]}}] Timeout: Where:[where column _uuid == {69f41db0-3b67-40ce-a811-31a29b2cc642}] Until: Durable: Comment: Lock: UUIDName:}]" I1115 05:39:27.141201 63045 cache.go:999] cache "msg"="processing update" 
"database"="OVN_Northbound" "table"="Port_Group" "uuid"="69f41db0-3b67-40ce-a811-31a29b2cc642" I1115 05:39:27.141245 63045 cache.go:1040] cache "msg"="updated row" "database"="OVN_Northbound" "table"="Port_Group" "uuid"="69f41db0-3b67-40ce-a811-31a29b2cc642" "new"="&{UUID:69f41db0-3b67-40ce-a811-31a29b2cc642 ACLs:[494a3f8a-8a8c-4041-bce9-4640c20a3f3c eb735fe2-016e-43d2-bbd3-0295cc799cd3] ExternalIDs:map[name:clusterPortGroup] Name:clusterPortGroup Ports:[c690cf3a-b770-41dc-97e7-f0d8a5d9708e]}" "old"="&{UUID:69f41db0-3b67-40ce-a811-31a29b2cc642 ACLs:[494a3f8a-8a8c-4041-bce9-4640c20a3f3c eb735fe2-016e-43d2-bbd3-0295cc799cd3] ExternalIDs:map[name:clusterPortGroup] Name:clusterPortGroup Ports:[]}" I1115 05:39:27.141281 63045 util.go:291] Hybridoverlay port does not exist for node release-ci-ci-op-k5cwk1pv-7cb14 I1115 05:39:27.141294 63045 util.go:300] haveMP true haveHO false ManagementPortAddress 10.42.0.2/24 HybridOverlayAddressOA 10.42.0.3/24 I1115 05:39:27.141349 63045 model_client.go:354] Update operations generated as: [{Op:update Table:Logical_Switch Row:map[other_config:{GoMap:map[mcast_eth_src:0a:58:0a:2a:00:01 mcast_ip4_src:10.42.0.1 mcast_querier:true mcast_snoop:true subnet:10.42.0.0/24]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {55ba86b1-407f-4f90-86ba-a2378c8d6ccc}] Until: Durable: Comment: Lock: UUIDName:}] I1115 05:39:27.141370 63045 transact.go:41] Configuring OVN: [{Op:update Table:Logical_Switch Row:map[other_config:{GoMap:map[mcast_eth_src:0a:58:0a:2a:00:01 mcast_ip4_src:10.42.0.1 mcast_querier:true mcast_snoop:true subnet:10.42.0.0/24]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {55ba86b1-407f-4f90-86ba-a2378c8d6ccc}] Until: Durable: Comment: Lock: UUIDName:}] I1115 05:39:27.141405 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:update Table:Logical_Switch Row:map[other_config:{GoMap:map[mcast_eth_src:0a:58:0a:2a:00:01 
mcast_ip4_src:10.42.0.1 mcast_querier:true mcast_snoop:true subnet:10.42.0.0/24]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {55ba86b1-407f-4f90-86ba-a2378c8d6ccc}] Until: Durable: Comment: Lock: UUIDName:}]" I1115 05:39:27.152580 63045 services_controller.go:241] Processing sync for service default/kubernetes I1115 05:39:27.152606 63045 services_controller.go:280] Service kubernetes retrieved from lister: &Service{ObjectMeta:{kubernetes default 75b8e8a4-42f7-4abc-b0ba-5afada275bf7 198 0 2022-11-15 05:38:37 +0000 UTC map[component:apiserver provider:kubernetes] map[] [] [] [{microshift Update v1 2022-11-15 05:38:37 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:component":{},"f:provider":{}}},"f:spec":{"f:clusterIP":{},"f:internalTrafficPolicy":{},"f:ipFamilyPolicy":{},"f:ports":{".":{},"k:{\"port\":443,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 6443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.43.0.1,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.43.0.1],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} I1115 05:39:27.152733 63045 kube.go:303] Getting endpoints for slice default/kubernetes I1115 05:39:27.152741 63045 kube.go:330] Adding slice kubernetes endpoints: [10.0.0.2], port: 6443 I1115 05:39:27.152750 63045 kube.go:346] LB Endpoints for default/kubernetes are: [10.0.0.2] / [] on port: 6443 I1115 05:39:27.152759 63045 
services_controller.go:296] Built service default/kubernetes LB cluster-wide configs []services.lbConfig(nil) I1115 05:39:27.152767 63045 services_controller.go:297] Built service default/kubernetes LB per-node configs []services.lbConfig{services.lbConfig{vips:[]string{"10.43.0.1"}, protocol:"TCP", inport:443, eps:util.LbEndpoints{V4IPs:[]string{"10.0.0.2"}, V6IPs:[]string{}, Port:6443}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} I1115 05:39:27.152800 63045 services_controller.go:303] Built service default/kubernetes cluster-wide LB []loadbalancer.LB{} I1115 05:39:27.152809 63045 services_controller.go:304] Built service default/kubernetes per-node LB []loadbalancer.LB{loadbalancer.LB{Name:"Service_default/kubernetes_TCP_node_router_release-ci-ci-op-k5cwk1pv-7cb14", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"default/kubernetes"}, Opts:loadbalancer.LBOpts{Unidling:false, Affinity:false, SkipSNAT:false}, Rules:[]loadbalancer.LBRule{loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.1", Port:443}, Targets:[]loadbalancer.Addr{loadbalancer.Addr{IP:"169.254.169.2", Port:6443}}}}, Switches:[]string(nil), Routers:[]string{"GR_release-ci-ci-op-k5cwk1pv-7cb14"}, Groups:[]string(nil)}, loadbalancer.LB{Name:"Service_default/kubernetes_TCP_node_switch_release-ci-ci-op-k5cwk1pv-7cb14", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"default/kubernetes"}, Opts:loadbalancer.LBOpts{Unidling:false, Affinity:false, SkipSNAT:false}, Rules:[]loadbalancer.LBRule{loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.1", Port:443}, Targets:[]loadbalancer.Addr{loadbalancer.Addr{IP:"10.0.0.2", Port:6443}}}}, Switches:[]string{"release-ci-ci-op-k5cwk1pv-7cb14"}, Routers:[]string(nil), Groups:[]string(nil)}} I1115 05:39:27.152844 63045 services_controller.go:305] Service default/kubernetes has 0 cluster-wide and 1 per-node configs, 
making 0 and 2 load balancers I1115 05:39:27.152854 63045 services_controller.go:316] Services do not match, existing lbs: []loadbalancer.LB{loadbalancer.LB{Name:"Service_default/kubernetes_TCP_node_switch_release-ci-ci-op-k5cwk1pv-7cb14", UUID:"c4d96c7e-1529-457a-9c00-dbe41d077136", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"default/kubernetes"}, Opts:loadbalancer.LBOpts{Unidling:false, Affinity:false, SkipSNAT:false}, Rules:[]loadbalancer.LBRule{loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.1", Port:443}, Targets:[]loadbalancer.Addr{loadbalancer.Addr{IP:"10.0.0.2", Port:6443}}}}, Switches:[]string{"release-ci-ci-op-k5cwk1pv-7cb14"}, Routers:[]string(nil), Groups:[]string(nil)}}, built lbs: []loadbalancer.LB{loadbalancer.LB{Name:"Service_default/kubernetes_TCP_node_router_release-ci-ci-op-k5cwk1pv-7cb14", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"default/kubernetes"}, Opts:loadbalancer.LBOpts{Unidling:false, Affinity:false, SkipSNAT:false}, Rules:[]loadbalancer.LBRule{loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.1", Port:443}, Targets:[]loadbalancer.Addr{loadbalancer.Addr{IP:"169.254.169.2", Port:6443}}}}, Switches:[]string(nil), Routers:[]string{"GR_release-ci-ci-op-k5cwk1pv-7cb14"}, Groups:[]string(nil)}, loadbalancer.LB{Name:"Service_default/kubernetes_TCP_node_switch_release-ci-ci-op-k5cwk1pv-7cb14", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"default/kubernetes"}, Opts:loadbalancer.LBOpts{Unidling:false, Affinity:false, SkipSNAT:false}, Rules:[]loadbalancer.LBRule{loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.1", Port:443}, Targets:[]loadbalancer.Addr{loadbalancer.Addr{IP:"10.0.0.2", Port:6443}}}}, Switches:[]string{"release-ci-ci-op-k5cwk1pv-7cb14"}, Routers:[]string(nil), Groups:[]string(nil)}} I1115 05:39:27.152975 63045 model_client.go:354] 
Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:default/kubernetes]} name:Service_default/kubernetes_TCP_node_switch_release-ci-ci-op-k5cwk1pv-7cb14 options:{GoMap:map[event:false reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.43.0.1:443:10.0.0.2:6443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c4d96c7e-1529-457a-9c00-dbe41d077136}] Until: Durable: Comment: Lock: UUIDName:}] I1115 05:39:27.153043 63045 model_client.go:345] Create operations generated as: [{Op:insert Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:default/kubernetes]} name:Service_default/kubernetes_TCP_node_router_release-ci-ci-op-k5cwk1pv-7cb14 options:{GoMap:map[event:false reject:true skip_snat:false]} protocol:{GoSet:[tcp]} vips:{GoMap:map[10.43.0.1:443:169.254.169.2:6443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996219}] I1115 05:39:27.153096 63045 services_controller.go:245] Finished syncing service kubernetes on namespace default : 523.024µs I1115 05:39:27.153120 63045 services_controller.go:218] "Error syncing service, retrying" service="default/kubernetes" err="failed to ensure service default/kubernetes load balancers: object not found" I1115 05:39:27.153139 63045 services_controller.go:241] Processing sync for service openshift-dns/dns-default I1115 05:39:27.153153 63045 services_controller.go:280] Service dns-default retrieved from lister: &Service{ObjectMeta:{dns-default openshift-dns b3f0582a-7984-43ab-a7fb-59fcf4f905df 327 0 2022-11-15 05:39:02 +0000 UTC map[] map[operator.openshift.io/spec-hash:c387daddabfc2dde4f8d3747fd4d4cc94e257885202c60f32bade610838704c3 service.beta.openshift.io/serving-cert-secret-name:dns-default-metrics-tls] [] [] [{microshift Update v1 2022-11-15 05:39:02 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:operator.openshift.io/spec-hash":{},"f:service.beta.openshift.io/serving-cert-secret-name":{}}},"f:spec":{"f:clusterIP":{},"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":53,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}},"k:{\"port\":53,\"protocol\":\"UDP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}},"k:{\"port\":9154,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:dns,Protocol:UDP,Port:53,TargetPort:{1 0 dns},NodePort:0,AppProtocol:nil,},ServicePort{Name:dns-tcp,Protocol:TCP,Port:53,TargetPort:{1 0 dns-tcp},NodePort:0,AppProtocol:nil,},ServicePort{Name:metrics,Protocol:TCP,Port:9154,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{dns.operator.openshift.io/daemonset-dns: default,},ClusterIP:10.43.0.10,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.43.0.10],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} I1115 05:39:27.153240 63045 kube.go:303] Getting endpoints for slice openshift-dns/dns-default-jxtwl I1115 05:39:27.153245 63045 kube.go:346] LB Endpoints for openshift-dns/dns-default are: [] / [] on port: 0 I1115 05:39:27.153251 63045 kube.go:303] Getting endpoints for slice openshift-dns/dns-default-jxtwl I1115 05:39:27.153255 63045 kube.go:346] LB Endpoints for openshift-dns/dns-default are: [] / [] on port: 0 I1115 05:39:27.153259 63045 kube.go:303] Getting endpoints for slice 
openshift-dns/dns-default-jxtwl I1115 05:39:27.153263 63045 kube.go:346] LB Endpoints for openshift-dns/dns-default are: [] / [] on port: 0 I1115 05:39:27.153268 63045 services_controller.go:296] Built service openshift-dns/dns-default LB cluster-wide configs []services.lbConfig(nil) I1115 05:39:27.153273 63045 services_controller.go:297] Built service openshift-dns/dns-default LB per-node configs []services.lbConfig{services.lbConfig{vips:[]string{"10.43.0.10"}, protocol:"UDP", inport:53, eps:util.LbEndpoints{V4IPs:[]string{}, V6IPs:[]string{}, Port:0}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{"10.43.0.10"}, protocol:"TCP", inport:53, eps:util.LbEndpoints{V4IPs:[]string{}, V6IPs:[]string{}, Port:0}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{"10.43.0.10"}, protocol:"TCP", inport:9154, eps:util.LbEndpoints{V4IPs:[]string{}, V6IPs:[]string{}, Port:0}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} I1115 05:39:27.153300 63045 services_controller.go:303] Built service openshift-dns/dns-default cluster-wide LB []loadbalancer.LB{} I1115 05:39:27.153307 63045 services_controller.go:304] Built service openshift-dns/dns-default per-node LB []loadbalancer.LB{loadbalancer.LB{Name:"Service_openshift-dns/dns-default_TCP_node_router+switch_release-ci-ci-op-k5cwk1pv-7cb14", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-dns/dns-default"}, Opts:loadbalancer.LBOpts{Unidling:false, Affinity:false, SkipSNAT:false}, Rules:[]loadbalancer.LBRule{loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.10", Port:53}, Targets:[]loadbalancer.Addr{}}, loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.10", Port:9154}, Targets:[]loadbalancer.Addr{}}}, Switches:[]string{"release-ci-ci-op-k5cwk1pv-7cb14"}, Routers:[]string{"GR_release-ci-ci-op-k5cwk1pv-7cb14"}, 
Groups:[]string(nil)}, loadbalancer.LB{Name:"Service_openshift-dns/dns-default_UDP_node_router+switch_release-ci-ci-op-k5cwk1pv-7cb14", UUID:"", Protocol:"UDP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-dns/dns-default"}, Opts:loadbalancer.LBOpts{Unidling:false, Affinity:false, SkipSNAT:false}, Rules:[]loadbalancer.LBRule{loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.10", Port:53}, Targets:[]loadbalancer.Addr{}}}, Switches:[]string{"release-ci-ci-op-k5cwk1pv-7cb14"}, Routers:[]string{"GR_release-ci-ci-op-k5cwk1pv-7cb14"}, Groups:[]string(nil)}} I1115 05:39:27.153338 63045 services_controller.go:305] Service openshift-dns/dns-default has 0 cluster-wide and 3 per-node configs, making 0 and 2 load balancers I1115 05:39:27.153350 63045 services_controller.go:316] Services do not match, existing lbs: []loadbalancer.LB{loadbalancer.LB{Name:"Service_openshift-dns/dns-default_TCP_node_switch_release-ci-ci-op-k5cwk1pv-7cb14", UUID:"2c790ce1-a33c-4f51-9824-b25b0b77e391", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-dns/dns-default"}, Opts:loadbalancer.LBOpts{Unidling:false, Affinity:false, SkipSNAT:false}, Rules:[]loadbalancer.LBRule{loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.10", Port:53}, Targets:[]loadbalancer.Addr{}}, loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.10", Port:9154}, Targets:[]loadbalancer.Addr{}}}, Switches:[]string{"release-ci-ci-op-k5cwk1pv-7cb14"}, Routers:[]string(nil), Groups:[]string(nil)}, loadbalancer.LB{Name:"Service_openshift-dns/dns-default_UDP_node_switch_release-ci-ci-op-k5cwk1pv-7cb14", UUID:"69bf140c-11c9-48c5-ba36-a8ff6921bf07", Protocol:"UDP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-dns/dns-default"}, Opts:loadbalancer.LBOpts{Unidling:false, Affinity:false, SkipSNAT:false}, 
Rules:[]loadbalancer.LBRule{loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.10", Port:53}, Targets:[]loadbalancer.Addr{}}}, Switches:[]string{"release-ci-ci-op-k5cwk1pv-7cb14"}, Routers:[]string(nil), Groups:[]string(nil)}}, built lbs: []loadbalancer.LB{loadbalancer.LB{Name:"Service_openshift-dns/dns-default_TCP_node_router+switch_release-ci-ci-op-k5cwk1pv-7cb14", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-dns/dns-default"}, Opts:loadbalancer.LBOpts{Unidling:false, Affinity:false, SkipSNAT:false}, Rules:[]loadbalancer.LBRule{loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.10", Port:53}, Targets:[]loadbalancer.Addr{}}, loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.10", Port:9154}, Targets:[]loadbalancer.Addr{}}}, Switches:[]string{"release-ci-ci-op-k5cwk1pv-7cb14"}, Routers:[]string{"GR_release-ci-ci-op-k5cwk1pv-7cb14"}, Groups:[]string(nil)}, loadbalancer.LB{Name:"Service_openshift-dns/dns-default_UDP_node_router+switch_release-ci-ci-op-k5cwk1pv-7cb14", UUID:"", Protocol:"UDP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-dns/dns-default"}, Opts:loadbalancer.LBOpts{Unidling:false, Affinity:false, SkipSNAT:false}, Rules:[]loadbalancer.LBRule{loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.10", Port:53}, Targets:[]loadbalancer.Addr{}}}, Switches:[]string{"release-ci-ci-op-k5cwk1pv-7cb14"}, Routers:[]string{"GR_release-ci-ci-op-k5cwk1pv-7cb14"}, Groups:[]string(nil)}} I1115 05:39:27.153437 63045 model_client.go:345] Create operations generated as: [{Op:insert Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_release-ci-ci-op-k5cwk1pv-7cb14 options:{GoMap:map[event:false reject:true skip_snat:false]} protocol:{GoSet:[tcp]} vips:{GoMap:map[10.43.0.10:53: 10.43.0.10:9154:]}] Rows:[] 
Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996220}]
I1115 05:39:27.153471 63045 model_client.go:345] Create operations generated as: [{Op:insert Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_UDP_node_router+switch_release-ci-ci-op-k5cwk1pv-7cb14 options:{GoMap:map[event:false reject:true skip_snat:false]} protocol:{GoSet:[udp]} vips:{GoMap:map[10.43.0.10:53:]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996221}]
I1115 05:39:27.153541 63045 model_client.go:370] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:load_balancer Mutator:insert Value:{GoSet:[{GoUUID:u2596996220} {GoUUID:u2596996221}]}}] Timeout: Where:[where column _uuid == {55ba86b1-407f-4f90-86ba-a2378c8d6ccc}] Until: Durable: Comment: Lock: UUIDName:}]
I1115 05:39:27.153577 63045 services_controller.go:245] Finished syncing service dns-default on namespace openshift-dns : 439.679µs
I1115 05:39:27.153593 63045 services_controller.go:218] "Error syncing service, retrying" service="openshift-dns/dns-default" err="failed to ensure service openshift-dns/dns-default load balancers: object not found"
I1115 05:39:27.154468 63045 ovs.go:203] Exec(22): stdout: "1\n"
I1115 05:39:27.154485 63045 ovs.go:204] Exec(22): stderr: ""
I1115 05:39:27.154514 63045 ovs.go:200] Exec(24): /usr/bin/ovs-ofctl --no-stats --no-names dump-flows br-int table=65,out_port=1
I1115 05:39:27.168136 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Southbound" "table"="Port_Binding" "uuid"="f058c4f7-e765-45d1-9a03-85d3609cf488"
I1115 05:39:27.168245 63045 cache.go:1040] cache "msg"="updated row" "database"="OVN_Southbound" "table"="Port_Binding" "uuid"="f058c4f7-e765-45d1-9a03-85d3609cf488" "new"="&{UUID:f058c4f7-e765-45d1-9a03-85d3609cf488 Chassis:0xc0009eb3a0 Datapath:831330e5-54ca-447f-b92c-4caa0cfec9eb Encap: ExternalIDs:map[] GatewayChassis:[] HaChassisGroup:0xc0009eb370 LogicalPort:cr-rtos-release-ci-ci-op-k5cwk1pv-7cb14 MAC:[0a:58:0a:2a:00:01 10.42.0.1/24] NatAddresses:[] Options:map[always-redirect:true distributed-port:rtos-release-ci-ci-op-k5cwk1pv-7cb14] ParentPort: RequestedChassis: Tag: TunnelKey:3 Type:chassisredirect Up:0xc000a36340 VirtualParent:}" "old"="&{UUID:f058c4f7-e765-45d1-9a03-85d3609cf488 Chassis: Datapath:831330e5-54ca-447f-b92c-4caa0cfec9eb Encap: ExternalIDs:map[] GatewayChassis:[] HaChassisGroup:0xc0009eb3b0 LogicalPort:cr-rtos-release-ci-ci-op-k5cwk1pv-7cb14 MAC:[0a:58:0a:2a:00:01 10.42.0.1/24] NatAddresses:[] Options:map[always-redirect:true distributed-port:rtos-release-ci-ci-op-k5cwk1pv-7cb14] ParentPort: RequestedChassis: Tag: TunnelKey:3 Type:chassisredirect Up:0xc000a36350 VirtualParent:}"
I1115 05:39:27.168311 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Southbound" "table"="Port_Binding" "uuid"="ca39f1f4-b2d6-421f-9354-db71d4e96db6"
I1115 05:39:27.168392 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Southbound" "table"="Port_Binding" "uuid"="ca39f1f4-b2d6-421f-9354-db71d4e96db6" "model"="&{UUID:ca39f1f4-b2d6-421f-9354-db71d4e96db6 Chassis: Datapath:03caccf0-c9e5-40b2-b775-78ddd6e1ae32 Encap: ExternalIDs:map[] GatewayChassis:[] HaChassisGroup: LogicalPort:k8s-release-ci-ci-op-k5cwk1pv-7cb14 MAC:[52:17:ad:e6:11:7d 10.42.0.2] NatAddresses:[] Options:map[] ParentPort: RequestedChassis: Tag: TunnelKey:2 Type: Up:0xc000a367e8 VirtualParent:}"
I1115 05:39:27.168445 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Southbound" "table"="Port_Binding" "uuid"="ca39f1f4-b2d6-421f-9354-db71d4e96db6"
I1115 05:39:27.168509 63045 cache.go:1040] cache "msg"="updated row" "database"="OVN_Southbound" "table"="Port_Binding" "uuid"="ca39f1f4-b2d6-421f-9354-db71d4e96db6" "new"="&{UUID:ca39f1f4-b2d6-421f-9354-db71d4e96db6 Chassis:0xc0009ebb60 Datapath:03caccf0-c9e5-40b2-b775-78ddd6e1ae32 Encap: ExternalIDs:map[] GatewayChassis:[] HaChassisGroup: LogicalPort:k8s-release-ci-ci-op-k5cwk1pv-7cb14 MAC:[52:17:ad:e6:11:7d 10.42.0.2] NatAddresses:[] Options:map[] ParentPort: RequestedChassis: Tag: TunnelKey:2 Type: Up:0xc000a36d50 VirtualParent:}" "old"="&{UUID:ca39f1f4-b2d6-421f-9354-db71d4e96db6 Chassis: Datapath:03caccf0-c9e5-40b2-b775-78ddd6e1ae32 Encap: ExternalIDs:map[] GatewayChassis:[] HaChassisGroup: LogicalPort:k8s-release-ci-ci-op-k5cwk1pv-7cb14 MAC:[52:17:ad:e6:11:7d 10.42.0.2] NatAddresses:[] Options:map[] ParentPort: RequestedChassis: Tag: TunnelKey:2 Type: Up:0xc000a36d88 VirtualParent:}"
I1115 05:39:27.168621 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Logical_Switch" "uuid"="55ba86b1-407f-4f90-86ba-a2378c8d6ccc"
I1115 05:39:27.168689 63045 cache.go:1040] cache "msg"="updated row" "database"="OVN_Northbound" "table"="Logical_Switch" "uuid"="55ba86b1-407f-4f90-86ba-a2378c8d6ccc" "new"="&{UUID:55ba86b1-407f-4f90-86ba-a2378c8d6ccc ACLs:[4d8ad76c-5917-4379-aaa9-fc4d3fbb25cd] Copp: DNSRecords:[] ExternalIDs:map[] ForwardingGroups:[] LoadBalancer:[c4d96c7e-1529-457a-9c00-dbe41d077136 2c790ce1-a33c-4f51-9824-b25b0b77e391 69bf140c-11c9-48c5-ba36-a8ff6921bf07] LoadBalancerGroup:[7b81a844-05a7-4d75-90db-fc377eeda1a5] Name:release-ci-ci-op-k5cwk1pv-7cb14 OtherConfig:map[mcast_eth_src:0a:58:0a:2a:00:01 mcast_ip4_src:10.42.0.1 mcast_querier:true mcast_snoop:true subnet:10.42.0.0/24] Ports:[16e74789-d810-4e2a-86f4-f17eb9166ace c690cf3a-b770-41dc-97e7-f0d8a5d9708e] QOSRules:[]}" "old"="&{UUID:55ba86b1-407f-4f90-86ba-a2378c8d6ccc ACLs:[4d8ad76c-5917-4379-aaa9-fc4d3fbb25cd] Copp: DNSRecords:[] ExternalIDs:map[] ForwardingGroups:[] LoadBalancer:[c4d96c7e-1529-457a-9c00-dbe41d077136 2c790ce1-a33c-4f51-9824-b25b0b77e391 69bf140c-11c9-48c5-ba36-a8ff6921bf07] LoadBalancerGroup:[7b81a844-05a7-4d75-90db-fc377eeda1a5] Name:release-ci-ci-op-k5cwk1pv-7cb14 OtherConfig:map[exclude_ips:10.42.0.2 mcast_eth_src:0a:58:0a:2a:00:01 mcast_ip4_src:10.42.0.1 mcast_querier:true mcast_snoop:true subnet:10.42.0.0/24] Ports:[16e74789-d810-4e2a-86f4-f17eb9166ace c690cf3a-b770-41dc-97e7-f0d8a5d9708e] QOSRules:[]}"
I1115 05:39:27.168737 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Logical_Switch_Port" "uuid"="c690cf3a-b770-41dc-97e7-f0d8a5d9708e"
I1115 05:39:27.168780 63045 cache.go:1040] cache "msg"="updated row" "database"="OVN_Northbound" "table"="Logical_Switch_Port" "uuid"="c690cf3a-b770-41dc-97e7-f0d8a5d9708e" "new"="&{UUID:c690cf3a-b770-41dc-97e7-f0d8a5d9708e Addresses:[52:17:ad:e6:11:7d 10.42.0.2] Dhcpv4Options: Dhcpv6Options: DynamicAddresses: Enabled: ExternalIDs:map[] HaChassisGroup: Name:k8s-release-ci-ci-op-k5cwk1pv-7cb14 Options:map[] ParentName: PortSecurity:[] Tag: TagRequest: Type: Up:0xc000a37848}" "old"="&{UUID:c690cf3a-b770-41dc-97e7-f0d8a5d9708e Addresses:[52:17:ad:e6:11:7d 10.42.0.2] Dhcpv4Options: Dhcpv6Options: DynamicAddresses: Enabled: ExternalIDs:map[] HaChassisGroup: Name:k8s-release-ci-ci-op-k5cwk1pv-7cb14 Options:map[] ParentName: PortSecurity:[] Tag: TagRequest: Type: Up:}"
I1115 05:39:27.168917 63045 model_client.go:345] Create operations generated as: [{Op:insert Table:Logical_Router Row:map[copp:{GoSet:[{GoUUID:361a0ade-a33a-40af-b4f6-dc5a6910be61}]} external_ids:{GoMap:map[physical_ip:10.0.0.2 physical_ips:10.0.0.2]} load_balancer_group:{GoSet:[{GoUUID:7b81a844-05a7-4d75-90db-fc377eeda1a5}]} name:GR_release-ci-ci-op-k5cwk1pv-7cb14 options:{GoMap:map[always_learn_from_arp_request:false chassis:77436c83-1258-484f-b8d8-ec91acb3c8f3 dynamic_neigh_routers:true lb_force_snat_ip:router_ip snat-ct-zone:0]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996222}]
I1115 05:39:27.168950 63045 transact.go:41] Configuring OVN: [{Op:wait Table:Logical_Router Row:map[] Rows:[map[name:GR_release-ci-ci-op-k5cwk1pv-7cb14]] Columns:[name] Mutations:[] Timeout:0xc000a37b40 Where:[where column name == GR_release-ci-ci-op-k5cwk1pv-7cb14] Until:!= Durable: Comment: Lock: UUIDName:} {Op:insert Table:Logical_Router Row:map[copp:{GoSet:[{GoUUID:361a0ade-a33a-40af-b4f6-dc5a6910be61}]} external_ids:{GoMap:map[physical_ip:10.0.0.2 physical_ips:10.0.0.2]} load_balancer_group:{GoSet:[{GoUUID:7b81a844-05a7-4d75-90db-fc377eeda1a5}]} name:GR_release-ci-ci-op-k5cwk1pv-7cb14 options:{GoMap:map[always_learn_from_arp_request:false chassis:77436c83-1258-484f-b8d8-ec91acb3c8f3 dynamic_neigh_routers:true lb_force_snat_ip:router_ip snat-ct-zone:0]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996222}]
I1115 05:39:27.169006 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:wait Table:Logical_Router Row:map[] Rows:[map[name:GR_release-ci-ci-op-k5cwk1pv-7cb14]] Columns:[name] Mutations:[] Timeout:0xc000a37b40 Where:[where column name == GR_release-ci-ci-op-k5cwk1pv-7cb14] Until:!= Durable: Comment: Lock: UUIDName:} {Op:insert Table:Logical_Router Row:map[copp:{GoSet:[{GoUUID:361a0ade-a33a-40af-b4f6-dc5a6910be61}]} external_ids:{GoMap:map[physical_ip:10.0.0.2 physical_ips:10.0.0.2]} load_balancer_group:{GoSet:[{GoUUID:7b81a844-05a7-4d75-90db-fc377eeda1a5}]} name:GR_release-ci-ci-op-k5cwk1pv-7cb14 options:{GoMap:map[always_learn_from_arp_request:false chassis:77436c83-1258-484f-b8d8-ec91acb3c8f3 dynamic_neigh_routers:true lb_force_snat_ip:router_ip snat-ct-zone:0]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996222}]"
I1115 05:39:27.169108 63045 ovs.go:203] Exec(23): stdout: ""
I1115 05:39:27.169119 63045 ovs.go:204] Exec(23): stderr: ""
I1115 05:39:27.169127 63045 ovs.go:200] Exec(25): /usr/bin/ovs-ofctl dump-aggregate br-int
I1115 05:39:27.196514 63045 services_controller.go:241] Processing sync for service default/kubernetes
I1115 05:39:27.196544 63045 services_controller.go:280] Service kubernetes retrieved from lister: &Service{ObjectMeta:{kubernetes default 75b8e8a4-42f7-4abc-b0ba-5afada275bf7 198 0 2022-11-15 05:38:37 +0000 UTC map[component:apiserver provider:kubernetes] map[] [] [] [{microshift Update v1 2022-11-15 05:38:37 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:component":{},"f:provider":{}}},"f:spec":{"f:clusterIP":{},"f:internalTrafficPolicy":{},"f:ipFamilyPolicy":{},"f:ports":{".":{},"k:{\"port\":443,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 6443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.43.0.1,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.43.0.1],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
I1115 05:39:27.196654 63045 kube.go:303] Getting endpoints for slice default/kubernetes
I1115 05:39:27.196662 63045 kube.go:330] Adding slice kubernetes endpoints: [10.0.0.2], port: 6443
I1115 05:39:27.196669 63045 kube.go:346] LB Endpoints for default/kubernetes are: [10.0.0.2] / [] on port: 6443
I1115 05:39:27.196680 63045 services_controller.go:296] Built service default/kubernetes LB cluster-wide configs []services.lbConfig(nil)
I1115 05:39:27.196688 63045 services_controller.go:297] Built service default/kubernetes LB per-node configs []services.lbConfig{services.lbConfig{vips:[]string{"10.43.0.1"}, protocol:"TCP", inport:443, eps:util.LbEndpoints{V4IPs:[]string{"10.0.0.2"}, V6IPs:[]string{}, Port:6443}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}
I1115 05:39:27.196722 63045 services_controller.go:303] Built service default/kubernetes cluster-wide LB []loadbalancer.LB{}
I1115 05:39:27.196728 63045 services_controller.go:304] Built service default/kubernetes per-node LB []loadbalancer.LB{loadbalancer.LB{Name:"Service_default/kubernetes_TCP_node_router_release-ci-ci-op-k5cwk1pv-7cb14", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"default/kubernetes"}, Opts:loadbalancer.LBOpts{Unidling:false, Affinity:false, SkipSNAT:false}, Rules:[]loadbalancer.LBRule{loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.1", Port:443}, Targets:[]loadbalancer.Addr{loadbalancer.Addr{IP:"169.254.169.2", Port:6443}}}}, Switches:[]string(nil), Routers:[]string{"GR_release-ci-ci-op-k5cwk1pv-7cb14"}, Groups:[]string(nil)}, loadbalancer.LB{Name:"Service_default/kubernetes_TCP_node_switch_release-ci-ci-op-k5cwk1pv-7cb14", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"default/kubernetes"}, Opts:loadbalancer.LBOpts{Unidling:false, Affinity:false, SkipSNAT:false}, Rules:[]loadbalancer.LBRule{loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.1", Port:443}, Targets:[]loadbalancer.Addr{loadbalancer.Addr{IP:"10.0.0.2", Port:6443}}}}, Switches:[]string{"release-ci-ci-op-k5cwk1pv-7cb14"}, Routers:[]string(nil), Groups:[]string(nil)}}
I1115 05:39:27.196763 63045 services_controller.go:305] Service default/kubernetes has 0 cluster-wide and 1 per-node configs, making 0 and 2 load balancers
I1115 05:39:27.196772 63045 services_controller.go:316] Services do not match, existing lbs: []loadbalancer.LB{loadbalancer.LB{Name:"Service_default/kubernetes_TCP_node_switch_release-ci-ci-op-k5cwk1pv-7cb14", UUID:"c4d96c7e-1529-457a-9c00-dbe41d077136", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"default/kubernetes"}, Opts:loadbalancer.LBOpts{Unidling:false, Affinity:false, SkipSNAT:false}, Rules:[]loadbalancer.LBRule{loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.1", Port:443}, Targets:[]loadbalancer.Addr{loadbalancer.Addr{IP:"10.0.0.2", Port:6443}}}}, Switches:[]string{"release-ci-ci-op-k5cwk1pv-7cb14"}, Routers:[]string(nil), Groups:[]string(nil)}}, built lbs: []loadbalancer.LB{loadbalancer.LB{Name:"Service_default/kubernetes_TCP_node_router_release-ci-ci-op-k5cwk1pv-7cb14", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"default/kubernetes"}, Opts:loadbalancer.LBOpts{Unidling:false, Affinity:false, SkipSNAT:false}, Rules:[]loadbalancer.LBRule{loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.1", Port:443}, Targets:[]loadbalancer.Addr{loadbalancer.Addr{IP:"169.254.169.2", Port:6443}}}}, Switches:[]string(nil), Routers:[]string{"GR_release-ci-ci-op-k5cwk1pv-7cb14"}, Groups:[]string(nil)}, loadbalancer.LB{Name:"Service_default/kubernetes_TCP_node_switch_release-ci-ci-op-k5cwk1pv-7cb14", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"default/kubernetes"}, Opts:loadbalancer.LBOpts{Unidling:false, Affinity:false, SkipSNAT:false}, Rules:[]loadbalancer.LBRule{loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.1", Port:443}, Targets:[]loadbalancer.Addr{loadbalancer.Addr{IP:"10.0.0.2", Port:6443}}}}, Switches:[]string{"release-ci-ci-op-k5cwk1pv-7cb14"}, Routers:[]string(nil), Groups:[]string(nil)}}
I1115 05:39:27.196900 63045 model_client.go:354] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:default/kubernetes]} name:Service_default/kubernetes_TCP_node_switch_release-ci-ci-op-k5cwk1pv-7cb14 options:{GoMap:map[event:false reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.43.0.1:443:10.0.0.2:6443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c4d96c7e-1529-457a-9c00-dbe41d077136}] Until: Durable: Comment: Lock: UUIDName:}]
I1115 05:39:27.196968 63045 model_client.go:345] Create operations generated as: [{Op:insert Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:default/kubernetes]} name:Service_default/kubernetes_TCP_node_router_release-ci-ci-op-k5cwk1pv-7cb14 options:{GoMap:map[event:false reject:true skip_snat:false]} protocol:{GoSet:[tcp]} vips:{GoMap:map[10.43.0.1:443:169.254.169.2:6443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996223}]
I1115 05:39:27.197004 63045 services_controller.go:245] Finished syncing service kubernetes on namespace default : 513.567µs
I1115 05:39:27.197024 63045 services_controller.go:218] "Error syncing service, retrying" service="default/kubernetes" err="failed to ensure service default/kubernetes load balancers: object not found"
I1115 05:39:27.197040 63045 services_controller.go:241] Processing sync for service openshift-dns/dns-default
I1115 05:39:27.197051 63045 services_controller.go:280] Service dns-default retrieved from lister: &Service{ObjectMeta:{dns-default openshift-dns b3f0582a-7984-43ab-a7fb-59fcf4f905df 327 0 2022-11-15 05:39:02 +0000 UTC map[] map[operator.openshift.io/spec-hash:c387daddabfc2dde4f8d3747fd4d4cc94e257885202c60f32bade610838704c3 service.beta.openshift.io/serving-cert-secret-name:dns-default-metrics-tls] [] [] [{microshift Update v1 2022-11-15 05:39:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:operator.openshift.io/spec-hash":{},"f:service.beta.openshift.io/serving-cert-secret-name":{}}},"f:spec":{"f:clusterIP":{},"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":53,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}},"k:{\"port\":53,\"protocol\":\"UDP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}},"k:{\"port\":9154,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:dns,Protocol:UDP,Port:53,TargetPort:{1 0 dns},NodePort:0,AppProtocol:nil,},ServicePort{Name:dns-tcp,Protocol:TCP,Port:53,TargetPort:{1 0 dns-tcp},NodePort:0,AppProtocol:nil,},ServicePort{Name:metrics,Protocol:TCP,Port:9154,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{dns.operator.openshift.io/daemonset-dns: default,},ClusterIP:10.43.0.10,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.43.0.10],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
I1115 05:39:27.197125 63045 kube.go:303] Getting endpoints for slice openshift-dns/dns-default-jxtwl
I1115 05:39:27.197130 63045 kube.go:346] LB Endpoints for openshift-dns/dns-default are: [] / [] on port: 0
I1115 05:39:27.197135 63045 kube.go:303] Getting endpoints for slice openshift-dns/dns-default-jxtwl
I1115 05:39:27.197140 63045 kube.go:346] LB Endpoints for openshift-dns/dns-default are: [] / [] on port: 0
I1115 05:39:27.197143 63045 kube.go:303] Getting endpoints for slice openshift-dns/dns-default-jxtwl
I1115 05:39:27.197147 63045 kube.go:346] LB Endpoints for openshift-dns/dns-default are: [] / [] on port: 0
I1115 05:39:27.197153 63045 services_controller.go:296] Built service openshift-dns/dns-default LB cluster-wide configs []services.lbConfig(nil)
I1115 05:39:27.197158 63045 services_controller.go:297] Built service openshift-dns/dns-default LB per-node configs []services.lbConfig{services.lbConfig{vips:[]string{"10.43.0.10"}, protocol:"UDP", inport:53, eps:util.LbEndpoints{V4IPs:[]string{}, V6IPs:[]string{}, Port:0}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{"10.43.0.10"}, protocol:"TCP", inport:53, eps:util.LbEndpoints{V4IPs:[]string{}, V6IPs:[]string{}, Port:0}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{"10.43.0.10"}, protocol:"TCP", inport:9154, eps:util.LbEndpoints{V4IPs:[]string{}, V6IPs:[]string{}, Port:0}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}
I1115 05:39:27.197182 63045 services_controller.go:303] Built service openshift-dns/dns-default cluster-wide LB []loadbalancer.LB{}
I1115 05:39:27.197192 63045 services_controller.go:304] Built service openshift-dns/dns-default per-node LB []loadbalancer.LB{loadbalancer.LB{Name:"Service_openshift-dns/dns-default_TCP_node_router+switch_release-ci-ci-op-k5cwk1pv-7cb14", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-dns/dns-default"}, Opts:loadbalancer.LBOpts{Unidling:false, Affinity:false, SkipSNAT:false}, Rules:[]loadbalancer.LBRule{loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.10", Port:53}, Targets:[]loadbalancer.Addr{}}, loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.10", Port:9154}, Targets:[]loadbalancer.Addr{}}}, Switches:[]string{"release-ci-ci-op-k5cwk1pv-7cb14"}, Routers:[]string{"GR_release-ci-ci-op-k5cwk1pv-7cb14"}, Groups:[]string(nil)}, loadbalancer.LB{Name:"Service_openshift-dns/dns-default_UDP_node_router+switch_release-ci-ci-op-k5cwk1pv-7cb14", UUID:"", Protocol:"UDP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-dns/dns-default"}, Opts:loadbalancer.LBOpts{Unidling:false, Affinity:false, SkipSNAT:false}, Rules:[]loadbalancer.LBRule{loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.10", Port:53}, Targets:[]loadbalancer.Addr{}}}, Switches:[]string{"release-ci-ci-op-k5cwk1pv-7cb14"}, Routers:[]string{"GR_release-ci-ci-op-k5cwk1pv-7cb14"}, Groups:[]string(nil)}}
I1115 05:39:27.197222 63045 services_controller.go:305] Service openshift-dns/dns-default has 0 cluster-wide and 3 per-node configs, making 0 and 2 load balancers
I1115 05:39:27.197251 63045 services_controller.go:316] Services do not match, existing lbs: []loadbalancer.LB{loadbalancer.LB{Name:"Service_openshift-dns/dns-default_TCP_node_switch_release-ci-ci-op-k5cwk1pv-7cb14", UUID:"2c790ce1-a33c-4f51-9824-b25b0b77e391", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-dns/dns-default"}, Opts:loadbalancer.LBOpts{Unidling:false, Affinity:false, SkipSNAT:false}, Rules:[]loadbalancer.LBRule{loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.10", Port:53}, Targets:[]loadbalancer.Addr{}}, loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.10", Port:9154}, Targets:[]loadbalancer.Addr{}}}, Switches:[]string{"release-ci-ci-op-k5cwk1pv-7cb14"}, Routers:[]string(nil), Groups:[]string(nil)}, loadbalancer.LB{Name:"Service_openshift-dns/dns-default_UDP_node_switch_release-ci-ci-op-k5cwk1pv-7cb14", UUID:"69bf140c-11c9-48c5-ba36-a8ff6921bf07", Protocol:"UDP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-dns/dns-default"}, Opts:loadbalancer.LBOpts{Unidling:false, Affinity:false, SkipSNAT:false}, Rules:[]loadbalancer.LBRule{loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.10", Port:53}, Targets:[]loadbalancer.Addr{}}}, Switches:[]string{"release-ci-ci-op-k5cwk1pv-7cb14"}, Routers:[]string(nil), Groups:[]string(nil)}}, built lbs: []loadbalancer.LB{loadbalancer.LB{Name:"Service_openshift-dns/dns-default_TCP_node_router+switch_release-ci-ci-op-k5cwk1pv-7cb14", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-dns/dns-default"}, Opts:loadbalancer.LBOpts{Unidling:false, Affinity:false, SkipSNAT:false}, Rules:[]loadbalancer.LBRule{loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.10", Port:53}, Targets:[]loadbalancer.Addr{}}, loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.10", Port:9154}, Targets:[]loadbalancer.Addr{}}}, Switches:[]string{"release-ci-ci-op-k5cwk1pv-7cb14"}, Routers:[]string{"GR_release-ci-ci-op-k5cwk1pv-7cb14"}, Groups:[]string(nil)}, loadbalancer.LB{Name:"Service_openshift-dns/dns-default_UDP_node_router+switch_release-ci-ci-op-k5cwk1pv-7cb14", UUID:"", Protocol:"UDP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-dns/dns-default"}, Opts:loadbalancer.LBOpts{Unidling:false, Affinity:false, SkipSNAT:false}, Rules:[]loadbalancer.LBRule{loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.10", Port:53}, Targets:[]loadbalancer.Addr{}}}, Switches:[]string{"release-ci-ci-op-k5cwk1pv-7cb14"}, Routers:[]string{"GR_release-ci-ci-op-k5cwk1pv-7cb14"}, Groups:[]string(nil)}}
I1115 05:39:27.197338 63045 model_client.go:345] Create operations generated as: [{Op:insert Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_release-ci-ci-op-k5cwk1pv-7cb14 options:{GoMap:map[event:false reject:true skip_snat:false]} protocol:{GoSet:[tcp]} vips:{GoMap:map[10.43.0.10:53: 10.43.0.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996224}]
I1115 05:39:27.197369 63045 model_client.go:345] Create operations generated as: [{Op:insert Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_UDP_node_router+switch_release-ci-ci-op-k5cwk1pv-7cb14 options:{GoMap:map[event:false reject:true skip_snat:false]} protocol:{GoSet:[udp]} vips:{GoMap:map[10.43.0.10:53:]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996225}]
I1115 05:39:27.197414 63045 model_client.go:370] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:load_balancer Mutator:insert Value:{GoSet:[{GoUUID:u2596996224} {GoUUID:u2596996225}]}}] Timeout: Where:[where column _uuid == {55ba86b1-407f-4f90-86ba-a2378c8d6ccc}] Until: Durable: Comment: Lock: UUIDName:}]
I1115 05:39:27.197441 63045 services_controller.go:245] Finished syncing service dns-default on namespace openshift-dns : 399.874µs
I1115 05:39:27.197454 63045 services_controller.go:218] "Error syncing service, retrying" service="openshift-dns/dns-default" err="failed to ensure service openshift-dns/dns-default load balancers: object not found"
I1115 05:39:27.197601 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Southbound" "table"="Datapath_Binding" "uuid"="565a72a5-31f1-429b-8168-c81f18c77757"
I1115 05:39:27.197638 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Southbound" "table"="Datapath_Binding" "uuid"="565a72a5-31f1-429b-8168-c81f18c77757" "model"="&{UUID:565a72a5-31f1-429b-8168-c81f18c77757 ExternalIDs:map[always_learn_from_arp_request:false logical-router:5c5170fd-1297-4d21-9aac-501b448f04c1 name:GR_release-ci-ci-op-k5cwk1pv-7cb14 snat-ct-zone:0] LoadBalancers:[] TunnelKey:4}"
I1115 05:39:27.197746 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Logical_Router" "uuid"="5c5170fd-1297-4d21-9aac-501b448f04c1"
I1115 05:39:27.197785 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Northbound" "table"="Logical_Router" "uuid"="5c5170fd-1297-4d21-9aac-501b448f04c1" "model"="&{UUID:5c5170fd-1297-4d21-9aac-501b448f04c1 Copp:0xc000a6a220 Enabled: ExternalIDs:map[physical_ip:10.0.0.2 physical_ips:10.0.0.2] LoadBalancer:[] LoadBalancerGroup:[7b81a844-05a7-4d75-90db-fc377eeda1a5] Name:GR_release-ci-ci-op-k5cwk1pv-7cb14 Nat:[] Options:map[always_learn_from_arp_request:false chassis:77436c83-1258-484f-b8d8-ec91acb3c8f3 dynamic_neigh_routers:true lb_force_snat_ip:router_ip snat-ct-zone:0] Policies:[] Ports:[] StaticRoutes:[]}"
I1115 05:39:27.197845 63045 model_client.go:345] Create operations generated as: [{Op:insert Table:Logical_Switch_Port Row:map[addresses:{GoSet:[router]} name:jtor-GR_release-ci-ci-op-k5cwk1pv-7cb14 options:{GoMap:map[router-port:rtoj-GR_release-ci-ci-op-k5cwk1pv-7cb14]} type:router] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996226}]
I1115 05:39:27.197880 63045 model_client.go:370] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:u2596996226}]}}] Timeout: Where:[where column _uuid == {4a7292ab-0326-4438-83d1-4c6f4765fce0}] Until: Durable: Comment: Lock: UUIDName:}]
I1115 05:39:27.197897 63045 transact.go:41] Configuring OVN: [{Op:insert Table:Logical_Switch_Port Row:map[addresses:{GoSet:[router]} name:jtor-GR_release-ci-ci-op-k5cwk1pv-7cb14 options:{GoMap:map[router-port:rtoj-GR_release-ci-ci-op-k5cwk1pv-7cb14]} type:router] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996226} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:u2596996226}]}}] Timeout: Where:[where column _uuid == {4a7292ab-0326-4438-83d1-4c6f4765fce0}] Until: Durable: Comment: Lock: UUIDName:}]
I1115 05:39:27.197944 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:insert Table:Logical_Switch_Port Row:map[addresses:{GoSet:[router]} name:jtor-GR_release-ci-ci-op-k5cwk1pv-7cb14 options:{GoMap:map[router-port:rtoj-GR_release-ci-ci-op-k5cwk1pv-7cb14]} type:router] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996226} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:u2596996226}]}}] Timeout: Where:[where column _uuid == {4a7292ab-0326-4438-83d1-4c6f4765fce0}] Until: Durable: Comment: Lock: UUIDName:}]"
I1115 05:39:27.200044 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Southbound" "table"="Port_Binding" "uuid"="65228b83-1f7d-43b9-96a4-3755c1794e4d"
I1115 05:39:27.200106 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Southbound" "table"="Port_Binding" "uuid"="65228b83-1f7d-43b9-96a4-3755c1794e4d" "model"="&{UUID:65228b83-1f7d-43b9-96a4-3755c1794e4d Chassis: Datapath:fec065db-9b49-4799-9d27-bae0364a24f2 Encap: ExternalIDs:map[] GatewayChassis:[] HaChassisGroup: LogicalPort:jtor-GR_release-ci-ci-op-k5cwk1pv-7cb14 MAC:[router] NatAddresses:[] Options:map[peer:rtoj-GR_release-ci-ci-op-k5cwk1pv-7cb14] ParentPort: RequestedChassis: Tag: TunnelKey:2 Type:patch Up:0xc0008eb240 VirtualParent:}"
I1115 05:39:27.200255 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Logical_Switch_Port" "uuid"="8cf571f0-00a4-41ca-97ae-ef814907077a"
I1115 05:39:27.200290 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Northbound" "table"="Logical_Switch_Port" "uuid"="8cf571f0-00a4-41ca-97ae-ef814907077a" "model"="&{UUID:8cf571f0-00a4-41ca-97ae-ef814907077a Addresses:[router] Dhcpv4Options: Dhcpv6Options: DynamicAddresses: Enabled: ExternalIDs:map[] HaChassisGroup: Name:jtor-GR_release-ci-ci-op-k5cwk1pv-7cb14 Options:map[router-port:rtoj-GR_release-ci-ci-op-k5cwk1pv-7cb14] ParentName: PortSecurity:[] Tag: TagRequest: Type:router Up:}"
I1115 05:39:27.200312 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Logical_Switch" "uuid"="4a7292ab-0326-4438-83d1-4c6f4765fce0"
I1115 05:39:27.200348 63045 cache.go:1040] cache "msg"="updated row" "database"="OVN_Northbound" "table"="Logical_Switch" "uuid"="4a7292ab-0326-4438-83d1-4c6f4765fce0" "new"="&{UUID:4a7292ab-0326-4438-83d1-4c6f4765fce0 ACLs:[] Copp: DNSRecords:[] ExternalIDs:map[] ForwardingGroups:[] LoadBalancer:[] LoadBalancerGroup:[] Name:join OtherConfig:map[] Ports:[402f3397-e1d9-4671-bc59-c4e9a435a625 8cf571f0-00a4-41ca-97ae-ef814907077a] QOSRules:[]}" "old"="&{UUID:4a7292ab-0326-4438-83d1-4c6f4765fce0 ACLs:[] Copp: DNSRecords:[] ExternalIDs:map[] ForwardingGroups:[] LoadBalancer:[] LoadBalancerGroup:[] Name:join OtherConfig:map[] Ports:[402f3397-e1d9-4671-bc59-c4e9a435a625] QOSRules:[]}"
I1115 05:39:27.200413 63045 model_client.go:345] Create operations generated as: [{Op:insert Table:Logical_Router_Port Row:map[mac:0a:58:64:40:00:02 name:rtoj-GR_release-ci-ci-op-k5cwk1pv-7cb14 networks:{GoSet:[100.64.0.2/16]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996227}]
I1115 05:39:27.200458 63045 model_client.go:370] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:u2596996227}]}}] Timeout: Where:[where column _uuid == {5c5170fd-1297-4d21-9aac-501b448f04c1}] Until: Durable: Comment: Lock: UUIDName:}]
I1115 05:39:27.200476 63045 transact.go:41] Configuring OVN: [{Op:insert Table:Logical_Router_Port Row:map[mac:0a:58:64:40:00:02 name:rtoj-GR_release-ci-ci-op-k5cwk1pv-7cb14 networks:{GoSet:[100.64.0.2/16]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996227} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:u2596996227}]}}] Timeout: Where:[where column _uuid == {5c5170fd-1297-4d21-9aac-501b448f04c1}] Until: Durable: Comment: Lock: UUIDName:}]
I1115 05:39:27.200534 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:insert Table:Logical_Router_Port Row:map[mac:0a:58:64:40:00:02 name:rtoj-GR_release-ci-ci-op-k5cwk1pv-7cb14 networks:{GoSet:[100.64.0.2/16]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996227} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:u2596996227}]}}] Timeout: Where:[where column _uuid == {5c5170fd-1297-4d21-9aac-501b448f04c1}] Until: Durable: Comment: Lock: UUIDName:}]"
I1115 05:39:27.200968 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Logical_Router_Port" "uuid"="ad17b3d7-02ae-4d31-a20c-ac73ecf7280f"
I1115 05:39:27.201008 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Northbound" "table"="Logical_Router_Port" "uuid"="ad17b3d7-02ae-4d31-a20c-ac73ecf7280f" "model"="&{UUID:ad17b3d7-02ae-4d31-a20c-ac73ecf7280f Enabled: ExternalIDs:map[] GatewayChassis:[] HaChassisGroup: Ipv6Prefix:[] Ipv6RaConfigs:map[] MAC:0a:58:64:40:00:02 Name:rtoj-GR_release-ci-ci-op-k5cwk1pv-7cb14 Networks:[100.64.0.2/16] Options:map[] Peer:}"
I1115 05:39:27.201032 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Logical_Router" "uuid"="5c5170fd-1297-4d21-9aac-501b448f04c1"
I1115 05:39:27.201082 63045 cache.go:1040] cache "msg"="updated row" "database"="OVN_Northbound" "table"="Logical_Router" "uuid"="5c5170fd-1297-4d21-9aac-501b448f04c1" "new"="&{UUID:5c5170fd-1297-4d21-9aac-501b448f04c1 Copp:0xc0009302c0 Enabled: ExternalIDs:map[physical_ip:10.0.0.2 physical_ips:10.0.0.2] LoadBalancer:[] LoadBalancerGroup:[7b81a844-05a7-4d75-90db-fc377eeda1a5] Name:GR_release-ci-ci-op-k5cwk1pv-7cb14 Nat:[] Options:map[always_learn_from_arp_request:false chassis:77436c83-1258-484f-b8d8-ec91acb3c8f3 dynamic_neigh_routers:true lb_force_snat_ip:router_ip snat-ct-zone:0] Policies:[] Ports:[ad17b3d7-02ae-4d31-a20c-ac73ecf7280f] StaticRoutes:[]}" "old"="&{UUID:5c5170fd-1297-4d21-9aac-501b448f04c1 Copp:0xc000930320 Enabled: ExternalIDs:map[physical_ip:10.0.0.2 physical_ips:10.0.0.2] LoadBalancer:[] LoadBalancerGroup:[7b81a844-05a7-4d75-90db-fc377eeda1a5] Name:GR_release-ci-ci-op-k5cwk1pv-7cb14 Nat:[] Options:map[always_learn_from_arp_request:false chassis:77436c83-1258-484f-b8d8-ec91acb3c8f3 dynamic_neigh_routers:true lb_force_snat_ip:router_ip snat-ct-zone:0] Policies:[] Ports:[] StaticRoutes:[]}"
I1115 05:39:27.201137 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Logical_Switch_Port" "uuid"="8cf571f0-00a4-41ca-97ae-ef814907077a"
I1115 05:39:27.201173 63045 cache.go:1040] cache "msg"="updated row" "database"="OVN_Northbound" "table"="Logical_Switch_Port" "uuid"="8cf571f0-00a4-41ca-97ae-ef814907077a" "new"="&{UUID:8cf571f0-00a4-41ca-97ae-ef814907077a Addresses:[router] Dhcpv4Options: Dhcpv6Options: DynamicAddresses: Enabled: ExternalIDs:map[] HaChassisGroup: Name:jtor-GR_release-ci-ci-op-k5cwk1pv-7cb14 Options:map[router-port:rtoj-GR_release-ci-ci-op-k5cwk1pv-7cb14] ParentName: PortSecurity:[] Tag: TagRequest: Type:router Up:0xc00094e000}" "old"="&{UUID:8cf571f0-00a4-41ca-97ae-ef814907077a Addresses:[router] Dhcpv4Options: Dhcpv6Options: DynamicAddresses: Enabled: ExternalIDs:map[] HaChassisGroup: Name:jtor-GR_release-ci-ci-op-k5cwk1pv-7cb14 Options:map[router-port:rtoj-GR_release-ci-ci-op-k5cwk1pv-7cb14] ParentName: PortSecurity:[] Tag: TagRequest: Type:router Up:}"
I1115 05:39:27.201229 63045 model_client.go:345] Create operations generated as: [{Op:insert Table:Logical_Router_Static_Route Row:map[ip_prefix:10.42.0.0/16 nexthop:100.64.0.1] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996228}]
I1115 05:39:27.201266 63045 model_client.go:370] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:u2596996228}]}}] Timeout: Where:[where column _uuid == {5c5170fd-1297-4d21-9aac-501b448f04c1}] Until: Durable: Comment: Lock: UUIDName:}]
I1115 05:39:27.201283 63045 transact.go:41] Configuring OVN: [{Op:insert Table:Logical_Router_Static_Route Row:map[ip_prefix:10.42.0.0/16 nexthop:100.64.0.1] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996228} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:u2596996228}]}}] Timeout: Where:[where column _uuid == {5c5170fd-1297-4d21-9aac-501b448f04c1}] Until: Durable: Comment: Lock: UUIDName:}]
I1115 05:39:27.201325 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:insert Table:Logical_Router_Static_Route Row:map[ip_prefix:10.42.0.0/16 nexthop:100.64.0.1] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996228} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:u2596996228}]}}] Timeout: Where:[where column _uuid == {5c5170fd-1297-4d21-9aac-501b448f04c1}] Until: Durable: Comment: Lock: UUIDName:}]"
I1115 05:39:27.207064 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Southbound" "table"="Port_Binding" "uuid"="65228b83-1f7d-43b9-96a4-3755c1794e4d"
I1115 05:39:27.207148 63045 cache.go:1040] cache "msg"="updated row" "database"="OVN_Southbound" "table"="Port_Binding" "uuid"="65228b83-1f7d-43b9-96a4-3755c1794e4d" "new"="&{UUID:65228b83-1f7d-43b9-96a4-3755c1794e4d Chassis: Datapath:fec065db-9b49-4799-9d27-bae0364a24f2 Encap: ExternalIDs:map[] GatewayChassis:[] HaChassisGroup: LogicalPort:jtor-GR_release-ci-ci-op-k5cwk1pv-7cb14 MAC:[router] NatAddresses:[] Options:map[l3gateway-chassis:77436c83-1258-484f-b8d8-ec91acb3c8f3 peer:rtoj-GR_release-ci-ci-op-k5cwk1pv-7cb14] ParentPort: RequestedChassis: Tag: TunnelKey:2 Type:l3gateway Up:0xc00094e7d0 VirtualParent:}" "old"="&{UUID:65228b83-1f7d-43b9-96a4-3755c1794e4d Chassis: Datapath:fec065db-9b49-4799-9d27-bae0364a24f2 Encap: ExternalIDs:map[] GatewayChassis:[] HaChassisGroup: LogicalPort:jtor-GR_release-ci-ci-op-k5cwk1pv-7cb14 MAC:[router] NatAddresses:[] Options:map[peer:rtoj-GR_release-ci-ci-op-k5cwk1pv-7cb14] ParentPort: RequestedChassis: Tag: TunnelKey:2 Type:patch Up:0xc00094e800 VirtualParent:}"
I1115 05:39:27.207167 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Southbound" "table"="Port_Binding" "uuid"="e9a0a9d3-7f77-44b1-9414-0197540f20b9"
I1115 05:39:27.207200 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Southbound" "table"="Port_Binding" "uuid"="e9a0a9d3-7f77-44b1-9414-0197540f20b9" "model"="&{UUID:e9a0a9d3-7f77-44b1-9414-0197540f20b9 Chassis: Datapath:565a72a5-31f1-429b-8168-c81f18c77757 Encap: ExternalIDs:map[] GatewayChassis:[] HaChassisGroup: LogicalPort:rtoj-GR_release-ci-ci-op-k5cwk1pv-7cb14 MAC:[0a:58:64:40:00:02 100.64.0.2/16] NatAddresses:[] Options:map[l3gateway-chassis:77436c83-1258-484f-b8d8-ec91acb3c8f3 peer:jtor-GR_release-ci-ci-op-k5cwk1pv-7cb14] ParentPort: RequestedChassis: Tag: TunnelKey:1 Type:l3gateway Up:0xc00094ea58 VirtualParent:}"
I1115 05:39:27.207287 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Logical_Router" "uuid"="5c5170fd-1297-4d21-9aac-501b448f04c1"
I1115 05:39:27.207334 63045 cache.go:1040] cache "msg"="updated row" "database"="OVN_Northbound"
"table"="Logical_Router" "uuid"="5c5170fd-1297-4d21-9aac-501b448f04c1" "new"="&{UUID:5c5170fd-1297-4d21-9aac-501b448f04c1 Copp:0xc000931bf0 Enabled: ExternalIDs:map[physical_ip:10.0.0.2 physical_ips:10.0.0.2] LoadBalancer:[] LoadBalancerGroup:[7b81a844-05a7-4d75-90db-fc377eeda1a5] Name:GR_release-ci-ci-op-k5cwk1pv-7cb14 Nat:[] Options:map[always_learn_from_arp_request:false chassis:77436c83-1258-484f-b8d8-ec91acb3c8f3 dynamic_neigh_routers:true lb_force_snat_ip:router_ip snat-ct-zone:0] Policies:[] Ports:[ad17b3d7-02ae-4d31-a20c-ac73ecf7280f] StaticRoutes:[d6521a95-c7a3-4e9b-b5b0-5f28a72008a9]}" "old"="&{UUID:5c5170fd-1297-4d21-9aac-501b448f04c1 Copp:0xc000931c60 Enabled: ExternalIDs:map[physical_ip:10.0.0.2 physical_ips:10.0.0.2] LoadBalancer:[] LoadBalancerGroup:[7b81a844-05a7-4d75-90db-fc377eeda1a5] Name:GR_release-ci-ci-op-k5cwk1pv-7cb14 Nat:[] Options:map[always_learn_from_arp_request:false chassis:77436c83-1258-484f-b8d8-ec91acb3c8f3 dynamic_neigh_routers:true lb_force_snat_ip:router_ip snat-ct-zone:0] Policies:[] Ports:[ad17b3d7-02ae-4d31-a20c-ac73ecf7280f] StaticRoutes:[]}" I1115 05:39:27.207348 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Logical_Router_Static_Route" "uuid"="d6521a95-c7a3-4e9b-b5b0-5f28a72008a9" I1115 05:39:27.207368 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Northbound" "table"="Logical_Router_Static_Route" "uuid"="d6521a95-c7a3-4e9b-b5b0-5f28a72008a9" "model"="&{UUID:d6521a95-c7a3-4e9b-b5b0-5f28a72008a9 BFD: ExternalIDs:map[] IPPrefix:10.42.0.0/16 Nexthop:100.64.0.1 Options:map[] OutputPort: Policy: RouteTable:}" I1115 05:39:27.207421 63045 model_client.go:345] Create operations generated as: [{Op:insert Table:Logical_Router_Port Row:map[external_ids:{GoMap:map[gateway-physical-ip:yes]} mac:42:01:0a:00:00:02 name:rtoe-GR_release-ci-ci-op-k5cwk1pv-7cb14 networks:{GoSet:[10.0.0.2/32]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: 
UUIDName:u2596996229}] I1115 05:39:27.207468 63045 model_client.go:370] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:u2596996229}]}}] Timeout: Where:[where column _uuid == {5c5170fd-1297-4d21-9aac-501b448f04c1}] Until: Durable: Comment: Lock: UUIDName:}] I1115 05:39:27.207482 63045 transact.go:41] Configuring OVN: [{Op:insert Table:Logical_Router_Port Row:map[external_ids:{GoMap:map[gateway-physical-ip:yes]} mac:42:01:0a:00:00:02 name:rtoe-GR_release-ci-ci-op-k5cwk1pv-7cb14 networks:{GoSet:[10.0.0.2/32]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996229} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:u2596996229}]}}] Timeout: Where:[where column _uuid == {5c5170fd-1297-4d21-9aac-501b448f04c1}] Until: Durable: Comment: Lock: UUIDName:}] I1115 05:39:27.207539 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:insert Table:Logical_Router_Port Row:map[external_ids:{GoMap:map[gateway-physical-ip:yes]} mac:42:01:0a:00:00:02 name:rtoe-GR_release-ci-ci-op-k5cwk1pv-7cb14 networks:{GoSet:[10.0.0.2/32]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996229} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:u2596996229}]}}] Timeout: Where:[where column _uuid == {5c5170fd-1297-4d21-9aac-501b448f04c1}] Until: Durable: Comment: Lock: UUIDName:}]" I1115 05:39:27.207933 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Logical_Router_Port" "uuid"="5b6e2b30-c89d-433b-9fbe-7761cab3f2f1" I1115 05:39:27.207973 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Northbound" "table"="Logical_Router_Port" 
"uuid"="5b6e2b30-c89d-433b-9fbe-7761cab3f2f1" "model"="&{UUID:5b6e2b30-c89d-433b-9fbe-7761cab3f2f1 Enabled: ExternalIDs:map[gateway-physical-ip:yes] GatewayChassis:[] HaChassisGroup: Ipv6Prefix:[] Ipv6RaConfigs:map[] MAC:42:01:0a:00:00:02 Name:rtoe-GR_release-ci-ci-op-k5cwk1pv-7cb14 Networks:[10.0.0.2/32] Options:map[] Peer:}" I1115 05:39:27.207997 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Logical_Router" "uuid"="5c5170fd-1297-4d21-9aac-501b448f04c1" I1115 05:39:27.208047 63045 cache.go:1040] cache "msg"="updated row" "database"="OVN_Northbound" "table"="Logical_Router" "uuid"="5c5170fd-1297-4d21-9aac-501b448f04c1" "new"="&{UUID:5c5170fd-1297-4d21-9aac-501b448f04c1 Copp:0xc000978ec0 Enabled: ExternalIDs:map[physical_ip:10.0.0.2 physical_ips:10.0.0.2] LoadBalancer:[] LoadBalancerGroup:[7b81a844-05a7-4d75-90db-fc377eeda1a5] Name:GR_release-ci-ci-op-k5cwk1pv-7cb14 Nat:[] Options:map[always_learn_from_arp_request:false chassis:77436c83-1258-484f-b8d8-ec91acb3c8f3 dynamic_neigh_routers:true lb_force_snat_ip:router_ip snat-ct-zone:0] Policies:[] Ports:[ad17b3d7-02ae-4d31-a20c-ac73ecf7280f 5b6e2b30-c89d-433b-9fbe-7761cab3f2f1] StaticRoutes:[d6521a95-c7a3-4e9b-b5b0-5f28a72008a9]}" "old"="&{UUID:5c5170fd-1297-4d21-9aac-501b448f04c1 Copp:0xc000978f40 Enabled: ExternalIDs:map[physical_ip:10.0.0.2 physical_ips:10.0.0.2] LoadBalancer:[] LoadBalancerGroup:[7b81a844-05a7-4d75-90db-fc377eeda1a5] Name:GR_release-ci-ci-op-k5cwk1pv-7cb14 Nat:[] Options:map[always_learn_from_arp_request:false chassis:77436c83-1258-484f-b8d8-ec91acb3c8f3 dynamic_neigh_routers:true lb_force_snat_ip:router_ip snat-ct-zone:0] Policies:[] Ports:[ad17b3d7-02ae-4d31-a20c-ac73ecf7280f] StaticRoutes:[d6521a95-c7a3-4e9b-b5b0-5f28a72008a9]}" I1115 05:39:27.208112 63045 model_client.go:345] Create operations generated as: [{Op:insert Table:Logical_Switch_Port Row:map[addresses:{GoSet:[unknown]} name:br-ex_release-ci-ci-op-k5cwk1pv-7cb14 
options:{GoMap:map[network_name:physnet]} tag_request:{GoSet:[0]} type:localnet] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996230}] I1115 05:39:27.208149 63045 model_client.go:345] Create operations generated as: [{Op:insert Table:Logical_Switch_Port Row:map[addresses:{GoSet:[42:01:0a:00:00:02]} name:etor-GR_release-ci-ci-op-k5cwk1pv-7cb14 options:{GoMap:map[router-port:rtoe-GR_release-ci-ci-op-k5cwk1pv-7cb14]} type:router] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996231}] I1115 05:39:27.208213 63045 model_client.go:345] Create operations generated as: [{Op:insert Table:Logical_Switch Row:map[name:ext_release-ci-ci-op-k5cwk1pv-7cb14 ports:{GoSet:[{GoUUID:u2596996230} {GoUUID:u2596996231}]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996232}] I1115 05:39:27.208229 63045 transact.go:41] Configuring OVN: [{Op:wait Table:Logical_Switch Row:map[] Rows:[map[name:ext_release-ci-ci-op-k5cwk1pv-7cb14]] Columns:[name] Mutations:[] Timeout:0xc00094f9e0 Where:[where column name == ext_release-ci-ci-op-k5cwk1pv-7cb14] Until:!= Durable: Comment: Lock: UUIDName:} {Op:insert Table:Logical_Switch_Port Row:map[addresses:{GoSet:[unknown]} name:br-ex_release-ci-ci-op-k5cwk1pv-7cb14 options:{GoMap:map[network_name:physnet]} tag_request:{GoSet:[0]} type:localnet] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996230} {Op:insert Table:Logical_Switch_Port Row:map[addresses:{GoSet:[42:01:0a:00:00:02]} name:etor-GR_release-ci-ci-op-k5cwk1pv-7cb14 options:{GoMap:map[router-port:rtoe-GR_release-ci-ci-op-k5cwk1pv-7cb14]} type:router] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996231} {Op:insert Table:Logical_Switch Row:map[name:ext_release-ci-ci-op-k5cwk1pv-7cb14 ports:{GoSet:[{GoUUID:u2596996230} {GoUUID:u2596996231}]}] Rows:[] Columns:[] 
Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996232}] I1115 05:39:27.208349 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:wait Table:Logical_Switch Row:map[] Rows:[map[name:ext_release-ci-ci-op-k5cwk1pv-7cb14]] Columns:[name] Mutations:[] Timeout:0xc00094f9e0 Where:[where column name == ext_release-ci-ci-op-k5cwk1pv-7cb14] Until:!= Durable: Comment: Lock: UUIDName:} {Op:insert Table:Logical_Switch_Port Row:map[addresses:{GoSet:[unknown]} name:br-ex_release-ci-ci-op-k5cwk1pv-7cb14 options:{GoMap:map[network_name:physnet]} tag_request:{GoSet:[0]} type:localnet] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996230} {Op:insert Table:Logical_Switch_Port Row:map[addresses:{GoSet:[42:01:0a:00:00:02]} name:etor-GR_release-ci-ci-op-k5cwk1pv-7cb14 options:{GoMap:map[router-port:rtoe-GR_release-ci-ci-op-k5cwk1pv-7cb14]} type:router] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996231} {Op:insert Table:Logical_Switch Row:map[name:ext_release-ci-ci-op-k5cwk1pv-7cb14 ports:{GoSet:[{GoUUID:u2596996230} {GoUUID:u2596996231}]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996232}]" I1115 05:39:27.208807 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Logical_Switch_Port" "uuid"="d93174fc-1e0a-4f3a-bb24-e656a84b1816" I1115 05:39:27.208847 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Northbound" "table"="Logical_Switch_Port" "uuid"="d93174fc-1e0a-4f3a-bb24-e656a84b1816" "model"="&{UUID:d93174fc-1e0a-4f3a-bb24-e656a84b1816 Addresses:[unknown] Dhcpv4Options: Dhcpv6Options: DynamicAddresses: Enabled: ExternalIDs:map[] HaChassisGroup: Name:br-ex_release-ci-ci-op-k5cwk1pv-7cb14 Options:map[network_name:physnet] ParentName: PortSecurity:[] Tag: TagRequest:0xc0009ce688 Type:localnet Up:}" I1115 
05:39:27.208863 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Logical_Switch_Port" "uuid"="e78d83a5-a48f-440a-bed3-69268ccf9f56" I1115 05:39:27.208885 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Northbound" "table"="Logical_Switch_Port" "uuid"="e78d83a5-a48f-440a-bed3-69268ccf9f56" "model"="&{UUID:e78d83a5-a48f-440a-bed3-69268ccf9f56 Addresses:[42:01:0a:00:00:02] Dhcpv4Options: Dhcpv6Options: DynamicAddresses: Enabled: ExternalIDs:map[] HaChassisGroup: Name:etor-GR_release-ci-ci-op-k5cwk1pv-7cb14 Options:map[router-port:rtoe-GR_release-ci-ci-op-k5cwk1pv-7cb14] ParentName: PortSecurity:[] Tag: TagRequest: Type:router Up:}" I1115 05:39:27.208898 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Logical_Switch" "uuid"="6c38102e-0494-490b-8261-cf3b14dff19d" I1115 05:39:27.208925 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Northbound" "table"="Logical_Switch" "uuid"="6c38102e-0494-490b-8261-cf3b14dff19d" "model"="&{UUID:6c38102e-0494-490b-8261-cf3b14dff19d ACLs:[] Copp: DNSRecords:[] ExternalIDs:map[] ForwardingGroups:[] LoadBalancer:[] LoadBalancerGroup:[] Name:ext_release-ci-ci-op-k5cwk1pv-7cb14 OtherConfig:map[] Ports:[d93174fc-1e0a-4f3a-bb24-e656a84b1816 e78d83a5-a48f-440a-bed3-69268ccf9f56] QOSRules:[]}" I1115 05:39:27.208988 63045 model_client.go:345] Create operations generated as: [{Op:insert Table:Logical_Router_Static_Route Row:map[ip_prefix:0.0.0.0/0 nexthop:10.0.0.1 output_port:{GoSet:[rtoe-GR_release-ci-ci-op-k5cwk1pv-7cb14]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996233}] I1115 05:39:27.209021 63045 model_client.go:370] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:u2596996233}]}}] Timeout: Where:[where column _uuid == {5c5170fd-1297-4d21-9aac-501b448f04c1}] 
Until: Durable: Comment: Lock: UUIDName:}] I1115 05:39:27.209039 63045 transact.go:41] Configuring OVN: [{Op:insert Table:Logical_Router_Static_Route Row:map[ip_prefix:0.0.0.0/0 nexthop:10.0.0.1 output_port:{GoSet:[rtoe-GR_release-ci-ci-op-k5cwk1pv-7cb14]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996233} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:u2596996233}]}}] Timeout: Where:[where column _uuid == {5c5170fd-1297-4d21-9aac-501b448f04c1}] Until: Durable: Comment: Lock: UUIDName:}] I1115 05:39:27.209080 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:insert Table:Logical_Router_Static_Route Row:map[ip_prefix:0.0.0.0/0 nexthop:10.0.0.1 output_port:{GoSet:[rtoe-GR_release-ci-ci-op-k5cwk1pv-7cb14]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996233} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:u2596996233}]}}] Timeout: Where:[where column _uuid == {5c5170fd-1297-4d21-9aac-501b448f04c1}] Until: Durable: Comment: Lock: UUIDName:}]" I1115 05:39:27.217689 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Logical_Router_Static_Route" "uuid"="273fffd0-48ca-49c6-92b3-e3d72b863200" I1115 05:39:27.217733 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Northbound" "table"="Logical_Router_Static_Route" "uuid"="273fffd0-48ca-49c6-92b3-e3d72b863200" "model"="&{UUID:273fffd0-48ca-49c6-92b3-e3d72b863200 BFD: ExternalIDs:map[] IPPrefix:0.0.0.0/0 Nexthop:10.0.0.1 Options:map[] OutputPort:0xc0009e9570 Policy: RouteTable:}" I1115 05:39:27.217750 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Logical_Router" "uuid"="5c5170fd-1297-4d21-9aac-501b448f04c1" I1115 
05:39:27.217801 63045 cache.go:1040] cache "msg"="updated row" "database"="OVN_Northbound" "table"="Logical_Router" "uuid"="5c5170fd-1297-4d21-9aac-501b448f04c1" "new"="&{UUID:5c5170fd-1297-4d21-9aac-501b448f04c1 Copp:0xc0009e9730 Enabled: ExternalIDs:map[physical_ip:10.0.0.2 physical_ips:10.0.0.2] LoadBalancer:[] LoadBalancerGroup:[7b81a844-05a7-4d75-90db-fc377eeda1a5] Name:GR_release-ci-ci-op-k5cwk1pv-7cb14 Nat:[] Options:map[always_learn_from_arp_request:false chassis:77436c83-1258-484f-b8d8-ec91acb3c8f3 dynamic_neigh_routers:true lb_force_snat_ip:router_ip snat-ct-zone:0] Policies:[] Ports:[ad17b3d7-02ae-4d31-a20c-ac73ecf7280f 5b6e2b30-c89d-433b-9fbe-7761cab3f2f1] StaticRoutes:[d6521a95-c7a3-4e9b-b5b0-5f28a72008a9 273fffd0-48ca-49c6-92b3-e3d72b863200]}" "old"="&{UUID:5c5170fd-1297-4d21-9aac-501b448f04c1 Copp:0xc0009e97a0 Enabled: ExternalIDs:map[physical_ip:10.0.0.2 physical_ips:10.0.0.2] LoadBalancer:[] LoadBalancerGroup:[7b81a844-05a7-4d75-90db-fc377eeda1a5] Name:GR_release-ci-ci-op-k5cwk1pv-7cb14 Nat:[] Options:map[always_learn_from_arp_request:false chassis:77436c83-1258-484f-b8d8-ec91acb3c8f3 dynamic_neigh_routers:true lb_force_snat_ip:router_ip snat-ct-zone:0] Policies:[] Ports:[ad17b3d7-02ae-4d31-a20c-ac73ecf7280f 5b6e2b30-c89d-433b-9fbe-7761cab3f2f1] StaticRoutes:[d6521a95-c7a3-4e9b-b5b0-5f28a72008a9]}" I1115 05:39:27.217861 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Logical_Switch_Port" "uuid"="d93174fc-1e0a-4f3a-bb24-e656a84b1816" I1115 05:39:27.217904 63045 cache.go:1040] cache "msg"="updated row" "database"="OVN_Northbound" "table"="Logical_Switch_Port" "uuid"="d93174fc-1e0a-4f3a-bb24-e656a84b1816" "new"="&{UUID:d93174fc-1e0a-4f3a-bb24-e656a84b1816 Addresses:[unknown] Dhcpv4Options: Dhcpv6Options: DynamicAddresses: Enabled: ExternalIDs:map[] HaChassisGroup: Name:br-ex_release-ci-ci-op-k5cwk1pv-7cb14 Options:map[network_name:physnet] ParentName: PortSecurity:[] Tag: TagRequest:0xc0009cf230 Type:localnet 
Up:0xc0009cf240}" "old"="&{UUID:d93174fc-1e0a-4f3a-bb24-e656a84b1816 Addresses:[unknown] Dhcpv4Options: Dhcpv6Options: DynamicAddresses: Enabled: ExternalIDs:map[] HaChassisGroup: Name:br-ex_release-ci-ci-op-k5cwk1pv-7cb14 Options:map[network_name:physnet] ParentName: PortSecurity:[] Tag: TagRequest:0xc0009cf250 Type:localnet Up:}" I1115 05:39:27.217920 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Logical_Switch_Port" "uuid"="e78d83a5-a48f-440a-bed3-69268ccf9f56" I1115 05:39:27.217952 63045 cache.go:1040] cache "msg"="updated row" "database"="OVN_Northbound" "table"="Logical_Switch_Port" "uuid"="e78d83a5-a48f-440a-bed3-69268ccf9f56" "new"="&{UUID:e78d83a5-a48f-440a-bed3-69268ccf9f56 Addresses:[42:01:0a:00:00:02] Dhcpv4Options: Dhcpv6Options: DynamicAddresses: Enabled: ExternalIDs:map[] HaChassisGroup: Name:etor-GR_release-ci-ci-op-k5cwk1pv-7cb14 Options:map[router-port:rtoe-GR_release-ci-ci-op-k5cwk1pv-7cb14] ParentName: PortSecurity:[] Tag: TagRequest: Type:router Up:0xc0009cf400}" "old"="&{UUID:e78d83a5-a48f-440a-bed3-69268ccf9f56 Addresses:[42:01:0a:00:00:02] Dhcpv4Options: Dhcpv6Options: DynamicAddresses: Enabled: ExternalIDs:map[] HaChassisGroup: Name:etor-GR_release-ci-ci-op-k5cwk1pv-7cb14 Options:map[router-port:rtoe-GR_release-ci-ci-op-k5cwk1pv-7cb14] ParentName: PortSecurity:[] Tag: TagRequest: Type:router Up:}" I1115 05:39:27.218000 63045 model_client.go:345] Create operations generated as: [{Op:insert Table:Logical_Router_Static_Route Row:map[ip_prefix:100.64.0.2 nexthop:100.64.0.2] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996234}] I1115 05:39:27.218040 63045 model_client.go:370] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:u2596996234}]}}] Timeout: Where:[where column _uuid == {0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5}] Until: Durable: Comment: 
Lock: UUIDName:}] I1115 05:39:27.218062 63045 transact.go:41] Configuring OVN: [{Op:insert Table:Logical_Router_Static_Route Row:map[ip_prefix:100.64.0.2 nexthop:100.64.0.2] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996234} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:u2596996234}]}}] Timeout: Where:[where column _uuid == {0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5}] Until: Durable: Comment: Lock: UUIDName:}] I1115 05:39:27.218104 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:insert Table:Logical_Router_Static_Route Row:map[ip_prefix:100.64.0.2 nexthop:100.64.0.2] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996234} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:u2596996234}]}}] Timeout: Where:[where column _uuid == {0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5}] Until: Durable: Comment: Lock: UUIDName:}]" I1115 05:39:27.218445 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Logical_Router_Static_Route" "uuid"="5f57312a-3e27-4593-a961-66666188af7b" I1115 05:39:27.218478 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Northbound" "table"="Logical_Router_Static_Route" "uuid"="5f57312a-3e27-4593-a961-66666188af7b" "model"="&{UUID:5f57312a-3e27-4593-a961-66666188af7b BFD: ExternalIDs:map[] IPPrefix:100.64.0.2 Nexthop:100.64.0.2 Options:map[] OutputPort: Policy: RouteTable:}" I1115 05:39:27.218523 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Logical_Router" "uuid"="0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5" I1115 05:39:27.218584 63045 cache.go:1040] cache "msg"="updated row" "database"="OVN_Northbound" "table"="Logical_Router" 
"uuid"="0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5" "new"="&{UUID:0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5 Copp: Enabled: ExternalIDs:map[k8s-cluster-router:yes k8s-ovn-topo-version:5] LoadBalancer:[] LoadBalancerGroup:[] Name:ovn_cluster_router Nat:[] Options:map[mcast_relay:true] Policies:[7969e67a-bd78-4152-9fcb-9a1dda1b1bf8 a3f0805d-e3db-4a3a-86f8-49cb514ad8f9 49b06b1c-7370-4cfc-8440-1876c45a4898] Ports:[d132d1a8-41f8-430a-a203-0d05cafce999 5e6913a8-e256-48aa-8cb4-bd6613d3ba1f] StaticRoutes:[cf65849c-8e5f-491e-bdfa-533d51a6a614 5f57312a-3e27-4593-a961-66666188af7b]}" "old"="&{UUID:0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5 Copp: Enabled: ExternalIDs:map[k8s-cluster-router:yes k8s-ovn-topo-version:5] LoadBalancer:[] LoadBalancerGroup:[] Name:ovn_cluster_router Nat:[] Options:map[mcast_relay:true] Policies:[7969e67a-bd78-4152-9fcb-9a1dda1b1bf8 a3f0805d-e3db-4a3a-86f8-49cb514ad8f9 49b06b1c-7370-4cfc-8440-1876c45a4898] Ports:[d132d1a8-41f8-430a-a203-0d05cafce999 5e6913a8-e256-48aa-8cb4-bd6613d3ba1f] StaticRoutes:[cf65849c-8e5f-491e-bdfa-533d51a6a614]}" I1115 05:39:27.220883 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Southbound" "table"="Port_Binding" "uuid"="055c383e-9a3c-44b9-bfeb-2ce5e04fe0db" I1115 05:39:27.220940 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Southbound" "table"="Port_Binding" "uuid"="055c383e-9a3c-44b9-bfeb-2ce5e04fe0db" "model"="&{UUID:055c383e-9a3c-44b9-bfeb-2ce5e04fe0db Chassis: Datapath:565a72a5-31f1-429b-8168-c81f18c77757 Encap: ExternalIDs:map[gateway-physical-ip:yes] GatewayChassis:[] HaChassisGroup: LogicalPort:rtoe-GR_release-ci-ci-op-k5cwk1pv-7cb14 MAC:[42:01:0a:00:00:02 10.0.0.2/32] NatAddresses:[] Options:map[l3gateway-chassis:77436c83-1258-484f-b8d8-ec91acb3c8f3 peer:etor-GR_release-ci-ci-op-k5cwk1pv-7cb14] ParentPort: RequestedChassis: Tag: TunnelKey:2 Type:l3gateway Up:0xc000b10378 VirtualParent:}" I1115 05:39:27.220990 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Southbound" 
"table"="Port_Binding" "uuid"="692037f9-888f-43d5-8307-38a50e947f76" I1115 05:39:27.221026 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Southbound" "table"="Port_Binding" "uuid"="692037f9-888f-43d5-8307-38a50e947f76" "model"="&{UUID:692037f9-888f-43d5-8307-38a50e947f76 Chassis: Datapath:6509bf50-18c5-4f84-915a-4d0c42f51967 Encap: ExternalIDs:map[] GatewayChassis:[] HaChassisGroup: LogicalPort:etor-GR_release-ci-ci-op-k5cwk1pv-7cb14 MAC:[42:01:0a:00:00:02] NatAddresses:[42:01:0a:00:00:02 10.0.0.2] Options:map[l3gateway-chassis:77436c83-1258-484f-b8d8-ec91acb3c8f3 peer:rtoe-GR_release-ci-ci-op-k5cwk1pv-7cb14] ParentPort: RequestedChassis: Tag: TunnelKey:2 Type:l3gateway Up:0xc000b10550 VirtualParent:}" I1115 05:39:27.221044 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Southbound" "table"="Port_Binding" "uuid"="7c5d6270-902c-4e4f-86bf-97ab2e5d2cb5" I1115 05:39:27.221074 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Southbound" "table"="Port_Binding" "uuid"="7c5d6270-902c-4e4f-86bf-97ab2e5d2cb5" "model"="&{UUID:7c5d6270-902c-4e4f-86bf-97ab2e5d2cb5 Chassis: Datapath:6509bf50-18c5-4f84-915a-4d0c42f51967 Encap: ExternalIDs:map[] GatewayChassis:[] HaChassisGroup: LogicalPort:br-ex_release-ci-ci-op-k5cwk1pv-7cb14 MAC:[unknown] NatAddresses:[] Options:map[network_name:physnet] ParentPort: RequestedChassis: Tag: TunnelKey:1 Type:localnet Up:0xc000b106d0 VirtualParent:}" I1115 05:39:27.221098 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Southbound" "table"="Datapath_Binding" "uuid"="6509bf50-18c5-4f84-915a-4d0c42f51967" I1115 05:39:27.221121 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Southbound" "table"="Datapath_Binding" "uuid"="6509bf50-18c5-4f84-915a-4d0c42f51967" "model"="&{UUID:6509bf50-18c5-4f84-915a-4d0c42f51967 ExternalIDs:map[logical-switch:6c38102e-0494-490b-8261-cf3b14dff19d name:ext_release-ci-ci-op-k5cwk1pv-7cb14] LoadBalancers:[] TunnelKey:5}" I1115 
05:39:27.221191 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Southbound" "table"="Port_Binding" "uuid"="65228b83-1f7d-43b9-96a4-3755c1794e4d" I1115 05:39:27.221265 63045 cache.go:1040] cache "msg"="updated row" "database"="OVN_Southbound" "table"="Port_Binding" "uuid"="65228b83-1f7d-43b9-96a4-3755c1794e4d" "new"="&{UUID:65228b83-1f7d-43b9-96a4-3755c1794e4d Chassis:0xc000b18480 Datapath:fec065db-9b49-4799-9d27-bae0364a24f2 Encap: ExternalIDs:map[] GatewayChassis:[] HaChassisGroup: LogicalPort:jtor-GR_release-ci-ci-op-k5cwk1pv-7cb14 MAC:[router] NatAddresses:[] Options:map[l3gateway-chassis:77436c83-1258-484f-b8d8-ec91acb3c8f3 peer:rtoj-GR_release-ci-ci-op-k5cwk1pv-7cb14] ParentPort: RequestedChassis: Tag: TunnelKey:2 Type:l3gateway Up:0xc000b10a50 VirtualParent:}" "old"="&{UUID:65228b83-1f7d-43b9-96a4-3755c1794e4d Chassis: Datapath:fec065db-9b49-4799-9d27-bae0364a24f2 Encap: ExternalIDs:map[] GatewayChassis:[] HaChassisGroup: LogicalPort:jtor-GR_release-ci-ci-op-k5cwk1pv-7cb14 MAC:[router] NatAddresses:[] Options:map[l3gateway-chassis:77436c83-1258-484f-b8d8-ec91acb3c8f3 peer:rtoj-GR_release-ci-ci-op-k5cwk1pv-7cb14] ParentPort: RequestedChassis: Tag: TunnelKey:2 Type:l3gateway Up:0xc000b10a60 VirtualParent:}" I1115 05:39:27.221285 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Southbound" "table"="Port_Binding" "uuid"="e9a0a9d3-7f77-44b1-9414-0197540f20b9" I1115 05:39:27.221324 63045 cache.go:1040] cache "msg"="updated row" "database"="OVN_Southbound" "table"="Port_Binding" "uuid"="e9a0a9d3-7f77-44b1-9414-0197540f20b9" "new"="&{UUID:e9a0a9d3-7f77-44b1-9414-0197540f20b9 Chassis:0xc000b187f0 Datapath:565a72a5-31f1-429b-8168-c81f18c77757 Encap: ExternalIDs:map[] GatewayChassis:[] HaChassisGroup: LogicalPort:rtoj-GR_release-ci-ci-op-k5cwk1pv-7cb14 MAC:[0a:58:64:40:00:02 100.64.0.2/16] NatAddresses:[] Options:map[l3gateway-chassis:77436c83-1258-484f-b8d8-ec91acb3c8f3 peer:jtor-GR_release-ci-ci-op-k5cwk1pv-7cb14] ParentPort: 
RequestedChassis: Tag: TunnelKey:1 Type:l3gateway Up:0xc000b10cc0 VirtualParent:}" "old"="&{UUID:e9a0a9d3-7f77-44b1-9414-0197540f20b9 Chassis: Datapath:565a72a5-31f1-429b-8168-c81f18c77757 Encap: ExternalIDs:map[] GatewayChassis:[] HaChassisGroup: LogicalPort:rtoj-GR_release-ci-ci-op-k5cwk1pv-7cb14 MAC:[0a:58:64:40:00:02 100.64.0.2/16] NatAddresses:[] Options:map[l3gateway-chassis:77436c83-1258-484f-b8d8-ec91acb3c8f3 peer:jtor-GR_release-ci-ci-op-k5cwk1pv-7cb14] ParentPort: RequestedChassis: Tag: TunnelKey:1 Type:l3gateway Up:0xc000b10cd0 VirtualParent:}" I1115 05:39:27.221881 63045 model_client.go:345] Create operations generated as: [{Op:insert Table:Logical_Router_Policy Row:map[action:reroute match:inport == "rtos-release-ci-ci-op-k5cwk1pv-7cb14" && ip4.dst == 10.0.0.2 /* release-ci-ci-op-k5cwk1pv-7cb14 */ nexthops:{GoSet:[10.42.0.2]} priority:1004] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996235}] I1115 05:39:27.221929 63045 model_client.go:370] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:u2596996235}]}}] Timeout: Where:[where column _uuid == {0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5}] Until: Durable: Comment: Lock: UUIDName:}] I1115 05:39:27.221950 63045 transact.go:41] Configuring OVN: [{Op:wait Table:Logical_Router_Policy Row:map[] Rows:[map[match:inport == "rtos-release-ci-ci-op-k5cwk1pv-7cb14" && ip4.dst == 10.0.0.2 /* release-ci-ci-op-k5cwk1pv-7cb14 */ priority:1004]] Columns:[priority match] Mutations:[] Timeout:0xc000b11df8 Where:[where column priority == 1004 where column match == inport == "rtos-release-ci-ci-op-k5cwk1pv-7cb14" && ip4.dst == 10.0.0.2 /* release-ci-ci-op-k5cwk1pv-7cb14 */] Until:!= Durable: Comment: Lock: UUIDName:} {Op:insert Table:Logical_Router_Policy Row:map[action:reroute match:inport == "rtos-release-ci-ci-op-k5cwk1pv-7cb14" && ip4.dst == 10.0.0.2 /* 
release-ci-ci-op-k5cwk1pv-7cb14 */ nexthops:{GoSet:[10.42.0.2]} priority:1004] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996235} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:u2596996235}]}}] Timeout: Where:[where column _uuid == {0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5}] Until: Durable: Comment: Lock: UUIDName:}] I1115 05:39:27.222014 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:wait Table:Logical_Router_Policy Row:map[] Rows:[map[match:inport == \"rtos-release-ci-ci-op-k5cwk1pv-7cb14\" && ip4.dst == 10.0.0.2 /* release-ci-ci-op-k5cwk1pv-7cb14 */ priority:1004]] Columns:[priority match] Mutations:[] Timeout:0xc000b11df8 Where:[where column priority == 1004 where column match == inport == \"rtos-release-ci-ci-op-k5cwk1pv-7cb14\" && ip4.dst == 10.0.0.2 /* release-ci-ci-op-k5cwk1pv-7cb14 */] Until:!= Durable: Comment: Lock: UUIDName:} {Op:insert Table:Logical_Router_Policy Row:map[action:reroute match:inport == \"rtos-release-ci-ci-op-k5cwk1pv-7cb14\" && ip4.dst == 10.0.0.2 /* release-ci-ci-op-k5cwk1pv-7cb14 */ nexthops:{GoSet:[10.42.0.2]} priority:1004] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996235} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:u2596996235}]}}] Timeout: Where:[where column _uuid == {0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5}] Until: Durable: Comment: Lock: UUIDName:}]" I1115 05:39:27.224783 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Logical_Router_Policy" "uuid"="8994edf3-c344-4f1b-82f0-335a38705a12" I1115 05:39:27.224825 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Northbound" "table"="Logical_Router_Policy" "uuid"="8994edf3-c344-4f1b-82f0-335a38705a12" 
"model"="&{UUID:8994edf3-c344-4f1b-82f0-335a38705a12 Action:reroute ExternalIDs:map[] Match:inport == \"rtos-release-ci-ci-op-k5cwk1pv-7cb14\" && ip4.dst == 10.0.0.2 /* release-ci-ci-op-k5cwk1pv-7cb14 */ Nexthop: Nexthops:[10.42.0.2] Options:map[] Priority:1004}" I1115 05:39:27.224841 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Logical_Router" "uuid"="0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5" I1115 05:39:27.224886 63045 cache.go:1040] cache "msg"="updated row" "database"="OVN_Northbound" "table"="Logical_Router" "uuid"="0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5" "new"="&{UUID:0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5 Copp: Enabled: ExternalIDs:map[k8s-cluster-router:yes k8s-ovn-topo-version:5] LoadBalancer:[] LoadBalancerGroup:[] Name:ovn_cluster_router Nat:[] Options:map[mcast_relay:true] Policies:[7969e67a-bd78-4152-9fcb-9a1dda1b1bf8 a3f0805d-e3db-4a3a-86f8-49cb514ad8f9 49b06b1c-7370-4cfc-8440-1876c45a4898 8994edf3-c344-4f1b-82f0-335a38705a12] Ports:[d132d1a8-41f8-430a-a203-0d05cafce999 5e6913a8-e256-48aa-8cb4-bd6613d3ba1f] StaticRoutes:[cf65849c-8e5f-491e-bdfa-533d51a6a614 5f57312a-3e27-4593-a961-66666188af7b]}" "old"="&{UUID:0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5 Copp: Enabled: ExternalIDs:map[k8s-cluster-router:yes k8s-ovn-topo-version:5] LoadBalancer:[] LoadBalancerGroup:[] Name:ovn_cluster_router Nat:[] Options:map[mcast_relay:true] Policies:[7969e67a-bd78-4152-9fcb-9a1dda1b1bf8 a3f0805d-e3db-4a3a-86f8-49cb514ad8f9 49b06b1c-7370-4cfc-8440-1876c45a4898] Ports:[d132d1a8-41f8-430a-a203-0d05cafce999 5e6913a8-e256-48aa-8cb4-bd6613d3ba1f] StaticRoutes:[cf65849c-8e5f-491e-bdfa-533d51a6a614 5f57312a-3e27-4593-a961-66666188af7b]}" I1115 05:39:27.226642 63045 master.go:1448] When adding node release-ci-ci-op-k5cwk1pv-7cb14, found 3 pods to add to retryPods I1115 05:39:27.226662 63045 master.go:1454] Adding pod openshift-ovn-kubernetes/ovnkube-node-b5wd2 to retryPods I1115 05:39:27.226675 63045 master.go:1454] Adding pod 
openshift-dns/node-resolver-jhcw4 to retryPods I1115 05:39:27.226683 63045 master.go:1454] Adding pod openshift-ovn-kubernetes/ovnkube-master-kdsb7 to retryPods I1115 05:39:27.226689 63045 obj_retry.go:195] Iterate retry objects requested (resource *v1.Pod) I1115 05:39:27.226711 63045 obj_retry.go:1219] Retry channel got triggered: retrying failed objects of type *v1.Pod I1115 05:39:27.226720 63045 obj_retry.go:1194] Going to retry *v1.Pod resource setup for 3 number of resources: [openshift-ovn-kubernetes/ovnkube-master-kdsb7 openshift-ovn-kubernetes/ovnkube-node-b5wd2 openshift-dns/node-resolver-jhcw4] I1115 05:39:27.226729 63045 obj_retry.go:1203] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources I1115 05:39:27.226744 63045 obj_retry.go:1106] Retry object setup: *v1.Pod openshift-dns/node-resolver-jhcw4 I1115 05:39:27.226760 63045 obj_retry.go:1157] Adding new object: *v1.Pod openshift-dns/node-resolver-jhcw4 I1115 05:39:27.226771 63045 obj_retry.go:1174] Retry successful for *v1.Pod openshift-dns/node-resolver-jhcw4 after 0 failed attempt(s) I1115 05:39:27.226775 63045 obj_retry.go:530] Recording success event on pod I1115 05:39:27.226784 63045 obj_retry.go:1106] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-master-kdsb7 I1115 05:39:27.226792 63045 obj_retry.go:1157] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-master-kdsb7 I1115 05:39:27.226798 63045 obj_retry.go:1174] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-master-kdsb7 after 0 failed attempt(s) I1115 05:39:27.226802 63045 obj_retry.go:530] Recording success event on pod I1115 05:39:27.226808 63045 obj_retry.go:1106] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-b5wd2 I1115 05:39:27.226815 63045 obj_retry.go:1157] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-b5wd2 I1115 05:39:27.226821 63045 obj_retry.go:1174] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-b5wd2 after 0 
failed attempt(s) I1115 05:39:27.226824 63045 obj_retry.go:530] Recording success event on pod I1115 05:39:27.226829 63045 obj_retry.go:1205] Function iterateRetryResources ended (in 109.567µs) I1115 05:39:27.239439 63045 services_controller.go:241] Processing sync for service default/kubernetes I1115 05:39:27.239462 63045 services_controller.go:280] Service kubernetes retrieved from lister: &Service{ObjectMeta:{kubernetes default 75b8e8a4-42f7-4abc-b0ba-5afada275bf7 198 0 2022-11-15 05:38:37 +0000 UTC map[component:apiserver provider:kubernetes] map[] [] [] [{microshift Update v1 2022-11-15 05:38:37 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:component":{},"f:provider":{}}},"f:spec":{"f:clusterIP":{},"f:internalTrafficPolicy":{},"f:ipFamilyPolicy":{},"f:ports":{".":{},"k:{\"port\":443,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 6443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.43.0.1,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.43.0.1],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} I1115 05:39:27.239589 63045 kube.go:303] Getting endpoints for slice default/kubernetes I1115 05:39:27.239597 63045 kube.go:330] Adding slice kubernetes endpoints: [10.0.0.2], port: 6443 I1115 05:39:27.239604 63045 kube.go:346] LB Endpoints for default/kubernetes are: [10.0.0.2] / [] on port: 6443 I1115 05:39:27.239612 63045 services_controller.go:296] Built service default/kubernetes LB 
cluster-wide configs []services.lbConfig(nil) I1115 05:39:27.239623 63045 services_controller.go:297] Built service default/kubernetes LB per-node configs []services.lbConfig{services.lbConfig{vips:[]string{"10.43.0.1"}, protocol:"TCP", inport:443, eps:util.LbEndpoints{V4IPs:[]string{"10.0.0.2"}, V6IPs:[]string{}, Port:6443}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} I1115 05:39:27.239658 63045 services_controller.go:303] Built service default/kubernetes cluster-wide LB []loadbalancer.LB{} I1115 05:39:27.239669 63045 services_controller.go:304] Built service default/kubernetes per-node LB []loadbalancer.LB{loadbalancer.LB{Name:"Service_default/kubernetes_TCP_node_router_release-ci-ci-op-k5cwk1pv-7cb14", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"default/kubernetes"}, Opts:loadbalancer.LBOpts{Unidling:false, Affinity:false, SkipSNAT:false}, Rules:[]loadbalancer.LBRule{loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.1", Port:443}, Targets:[]loadbalancer.Addr{loadbalancer.Addr{IP:"169.254.169.2", Port:6443}}}}, Switches:[]string(nil), Routers:[]string{"GR_release-ci-ci-op-k5cwk1pv-7cb14"}, Groups:[]string(nil)}, loadbalancer.LB{Name:"Service_default/kubernetes_TCP_node_switch_release-ci-ci-op-k5cwk1pv-7cb14", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"default/kubernetes"}, Opts:loadbalancer.LBOpts{Unidling:false, Affinity:false, SkipSNAT:false}, Rules:[]loadbalancer.LBRule{loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.1", Port:443}, Targets:[]loadbalancer.Addr{loadbalancer.Addr{IP:"10.0.0.2", Port:6443}}}}, Switches:[]string{"release-ci-ci-op-k5cwk1pv-7cb14"}, Routers:[]string(nil), Groups:[]string(nil)}} I1115 05:39:27.239702 63045 services_controller.go:305] Service default/kubernetes has 0 cluster-wide and 1 per-node configs, making 0 and 2 load balancers I1115 05:39:27.239713 63045 
services_controller.go:316] Services do not match, existing lbs: []loadbalancer.LB{loadbalancer.LB{Name:"Service_default/kubernetes_TCP_node_switch_release-ci-ci-op-k5cwk1pv-7cb14", UUID:"c4d96c7e-1529-457a-9c00-dbe41d077136", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"default/kubernetes"}, Opts:loadbalancer.LBOpts{Unidling:false, Affinity:false, SkipSNAT:false}, Rules:[]loadbalancer.LBRule{loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.1", Port:443}, Targets:[]loadbalancer.Addr{loadbalancer.Addr{IP:"10.0.0.2", Port:6443}}}}, Switches:[]string{"release-ci-ci-op-k5cwk1pv-7cb14"}, Routers:[]string(nil), Groups:[]string(nil)}}, built lbs: []loadbalancer.LB{loadbalancer.LB{Name:"Service_default/kubernetes_TCP_node_router_release-ci-ci-op-k5cwk1pv-7cb14", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"default/kubernetes"}, Opts:loadbalancer.LBOpts{Unidling:false, Affinity:false, SkipSNAT:false}, Rules:[]loadbalancer.LBRule{loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.1", Port:443}, Targets:[]loadbalancer.Addr{loadbalancer.Addr{IP:"169.254.169.2", Port:6443}}}}, Switches:[]string(nil), Routers:[]string{"GR_release-ci-ci-op-k5cwk1pv-7cb14"}, Groups:[]string(nil)}, loadbalancer.LB{Name:"Service_default/kubernetes_TCP_node_switch_release-ci-ci-op-k5cwk1pv-7cb14", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"default/kubernetes"}, Opts:loadbalancer.LBOpts{Unidling:false, Affinity:false, SkipSNAT:false}, Rules:[]loadbalancer.LBRule{loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.1", Port:443}, Targets:[]loadbalancer.Addr{loadbalancer.Addr{IP:"10.0.0.2", Port:6443}}}}, Switches:[]string{"release-ci-ci-op-k5cwk1pv-7cb14"}, Routers:[]string(nil), Groups:[]string(nil)}} I1115 05:39:27.239838 63045 model_client.go:354] Update operations generated as: [{Op:update 
Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:default/kubernetes]} name:Service_default/kubernetes_TCP_node_switch_release-ci-ci-op-k5cwk1pv-7cb14 options:{GoMap:map[event:false reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.43.0.1:443:10.0.0.2:6443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c4d96c7e-1529-457a-9c00-dbe41d077136}] Until: Durable: Comment: Lock: UUIDName:}] I1115 05:39:27.239904 63045 model_client.go:345] Create operations generated as: [{Op:insert Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:default/kubernetes]} name:Service_default/kubernetes_TCP_node_router_release-ci-ci-op-k5cwk1pv-7cb14 options:{GoMap:map[event:false reject:true skip_snat:false]} protocol:{GoSet:[tcp]} vips:{GoMap:map[10.43.0.1:443:169.254.169.2:6443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996236}] I1115 05:39:27.239954 63045 model_client.go:370] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:load_balancer Mutator:insert Value:{GoSet:[{GoUUID:u2596996236}]}}] Timeout: Where:[where column _uuid == {5c5170fd-1297-4d21-9aac-501b448f04c1}] Until: Durable: Comment: Lock: UUIDName:}] I1115 05:39:27.239973 63045 transact.go:41] Configuring OVN: [{Op:wait Table:Load_Balancer Row:map[] Rows:[map[name:Service_default/kubernetes_TCP_node_router_release-ci-ci-op-k5cwk1pv-7cb14]] Columns:[name] Mutations:[] Timeout:0xc000bcad28 Where:[where column name == Service_default/kubernetes_TCP_node_router_release-ci-ci-op-k5cwk1pv-7cb14] Until:!= Durable: Comment: Lock: UUIDName:} {Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:default/kubernetes]} name:Service_default/kubernetes_TCP_node_switch_release-ci-ci-op-k5cwk1pv-7cb14 
options:{GoMap:map[event:false reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.43.0.1:443:10.0.0.2:6443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c4d96c7e-1529-457a-9c00-dbe41d077136}] Until: Durable: Comment: Lock: UUIDName:} {Op:insert Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:default/kubernetes]} name:Service_default/kubernetes_TCP_node_router_release-ci-ci-op-k5cwk1pv-7cb14 options:{GoMap:map[event:false reject:true skip_snat:false]} protocol:{GoSet:[tcp]} vips:{GoMap:map[10.43.0.1:443:169.254.169.2:6443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996236} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:load_balancer Mutator:insert Value:{GoSet:[{GoUUID:u2596996236}]}}] Timeout: Where:[where column _uuid == {5c5170fd-1297-4d21-9aac-501b448f04c1}] Until: Durable: Comment: Lock: UUIDName:}] I1115 05:39:27.240119 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:wait Table:Load_Balancer Row:map[] Rows:[map[name:Service_default/kubernetes_TCP_node_router_release-ci-ci-op-k5cwk1pv-7cb14]] Columns:[name] Mutations:[] Timeout:0xc000bcad28 Where:[where column name == Service_default/kubernetes_TCP_node_router_release-ci-ci-op-k5cwk1pv-7cb14] Until:!= Durable: Comment: Lock: UUIDName:} {Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:default/kubernetes]} name:Service_default/kubernetes_TCP_node_switch_release-ci-ci-op-k5cwk1pv-7cb14 options:{GoMap:map[event:false reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.43.0.1:443:10.0.0.2:6443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {c4d96c7e-1529-457a-9c00-dbe41d077136}] Until: Durable: Comment: Lock: UUIDName:} {Op:insert 
Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:default/kubernetes]} name:Service_default/kubernetes_TCP_node_router_release-ci-ci-op-k5cwk1pv-7cb14 options:{GoMap:map[event:false reject:true skip_snat:false]} protocol:{GoSet:[tcp]} vips:{GoMap:map[10.43.0.1:443:169.254.169.2:6443]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996236} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:load_balancer Mutator:insert Value:{GoSet:[{GoUUID:u2596996236}]}}] Timeout: Where:[where column _uuid == {5c5170fd-1297-4d21-9aac-501b448f04c1}] Until: Durable: Comment: Lock: UUIDName:}]" I1115 05:39:27.240231 63045 services_controller.go:241] Processing sync for service openshift-dns/dns-default I1115 05:39:27.240241 63045 services_controller.go:280] Service dns-default retrieved from lister: &Service{ObjectMeta:{dns-default openshift-dns b3f0582a-7984-43ab-a7fb-59fcf4f905df 327 0 2022-11-15 05:39:02 +0000 UTC map[] map[operator.openshift.io/spec-hash:c387daddabfc2dde4f8d3747fd4d4cc94e257885202c60f32bade610838704c3 service.beta.openshift.io/serving-cert-secret-name:dns-default-metrics-tls] [] [] [{microshift Update v1 2022-11-15 05:39:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:operator.openshift.io/spec-hash":{},"f:service.beta.openshift.io/serving-cert-secret-name":{}}},"f:spec":{"f:clusterIP":{},"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":53,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}},"k:{\"port\":53,\"protocol\":\"UDP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}},"k:{\"port\":9154,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:dns,Protocol:UDP,Port:53,TargetPort:{1 0 
dns},NodePort:0,AppProtocol:nil,},ServicePort{Name:dns-tcp,Protocol:TCP,Port:53,TargetPort:{1 0 dns-tcp},NodePort:0,AppProtocol:nil,},ServicePort{Name:metrics,Protocol:TCP,Port:9154,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{dns.operator.openshift.io/daemonset-dns: default,},ClusterIP:10.43.0.10,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.43.0.10],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},} I1115 05:39:27.240419 63045 kube.go:303] Getting endpoints for slice openshift-dns/dns-default-jxtwl I1115 05:39:27.240426 63045 kube.go:346] LB Endpoints for openshift-dns/dns-default are: [] / [] on port: 0 I1115 05:39:27.240431 63045 kube.go:303] Getting endpoints for slice openshift-dns/dns-default-jxtwl I1115 05:39:27.240436 63045 kube.go:346] LB Endpoints for openshift-dns/dns-default are: [] / [] on port: 0 I1115 05:39:27.240447 63045 kube.go:303] Getting endpoints for slice openshift-dns/dns-default-jxtwl I1115 05:39:27.240455 63045 kube.go:346] LB Endpoints for openshift-dns/dns-default are: [] / [] on port: 0 I1115 05:39:27.240461 63045 services_controller.go:296] Built service openshift-dns/dns-default LB cluster-wide configs []services.lbConfig(nil) I1115 05:39:27.240468 63045 services_controller.go:297] Built service openshift-dns/dns-default LB per-node configs []services.lbConfig{services.lbConfig{vips:[]string{"10.43.0.10"}, protocol:"UDP", inport:53, eps:util.LbEndpoints{V4IPs:[]string{}, V6IPs:[]string{}, Port:0}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{"10.43.0.10"}, 
protocol:"TCP", inport:53, eps:util.LbEndpoints{V4IPs:[]string{}, V6IPs:[]string{}, Port:0}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{"10.43.0.10"}, protocol:"TCP", inport:9154, eps:util.LbEndpoints{V4IPs:[]string{}, V6IPs:[]string{}, Port:0}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}} I1115 05:39:27.240530 63045 services_controller.go:303] Built service openshift-dns/dns-default cluster-wide LB []loadbalancer.LB{} I1115 05:39:27.240541 63045 services_controller.go:304] Built service openshift-dns/dns-default per-node LB []loadbalancer.LB{loadbalancer.LB{Name:"Service_openshift-dns/dns-default_TCP_node_router+switch_release-ci-ci-op-k5cwk1pv-7cb14", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-dns/dns-default"}, Opts:loadbalancer.LBOpts{Unidling:false, Affinity:false, SkipSNAT:false}, Rules:[]loadbalancer.LBRule{loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.10", Port:53}, Targets:[]loadbalancer.Addr{}}, loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.10", Port:9154}, Targets:[]loadbalancer.Addr{}}}, Switches:[]string{"release-ci-ci-op-k5cwk1pv-7cb14"}, Routers:[]string{"GR_release-ci-ci-op-k5cwk1pv-7cb14"}, Groups:[]string(nil)}, loadbalancer.LB{Name:"Service_openshift-dns/dns-default_UDP_node_router+switch_release-ci-ci-op-k5cwk1pv-7cb14", UUID:"", Protocol:"UDP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-dns/dns-default"}, Opts:loadbalancer.LBOpts{Unidling:false, Affinity:false, SkipSNAT:false}, Rules:[]loadbalancer.LBRule{loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.10", Port:53}, Targets:[]loadbalancer.Addr{}}}, Switches:[]string{"release-ci-ci-op-k5cwk1pv-7cb14"}, Routers:[]string{"GR_release-ci-ci-op-k5cwk1pv-7cb14"}, Groups:[]string(nil)}} I1115 05:39:27.240591 63045 services_controller.go:305] Service 
openshift-dns/dns-default has 0 cluster-wide and 3 per-node configs, making 0 and 2 load balancers I1115 05:39:27.240605 63045 services_controller.go:316] Services do not match, existing lbs: []loadbalancer.LB{loadbalancer.LB{Name:"Service_openshift-dns/dns-default_TCP_node_switch_release-ci-ci-op-k5cwk1pv-7cb14", UUID:"2c790ce1-a33c-4f51-9824-b25b0b77e391", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-dns/dns-default"}, Opts:loadbalancer.LBOpts{Unidling:false, Affinity:false, SkipSNAT:false}, Rules:[]loadbalancer.LBRule{loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.10", Port:53}, Targets:[]loadbalancer.Addr{}}, loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.10", Port:9154}, Targets:[]loadbalancer.Addr{}}}, Switches:[]string{"release-ci-ci-op-k5cwk1pv-7cb14"}, Routers:[]string(nil), Groups:[]string(nil)}, loadbalancer.LB{Name:"Service_openshift-dns/dns-default_UDP_node_switch_release-ci-ci-op-k5cwk1pv-7cb14", UUID:"69bf140c-11c9-48c5-ba36-a8ff6921bf07", Protocol:"UDP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-dns/dns-default"}, Opts:loadbalancer.LBOpts{Unidling:false, Affinity:false, SkipSNAT:false}, Rules:[]loadbalancer.LBRule{loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.10", Port:53}, Targets:[]loadbalancer.Addr{}}}, Switches:[]string{"release-ci-ci-op-k5cwk1pv-7cb14"}, Routers:[]string(nil), Groups:[]string(nil)}}, built lbs: []loadbalancer.LB{loadbalancer.LB{Name:"Service_openshift-dns/dns-default_TCP_node_router+switch_release-ci-ci-op-k5cwk1pv-7cb14", UUID:"", Protocol:"TCP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-dns/dns-default"}, Opts:loadbalancer.LBOpts{Unidling:false, Affinity:false, SkipSNAT:false}, Rules:[]loadbalancer.LBRule{loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.10", Port:53}, Targets:[]loadbalancer.Addr{}}, 
loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.10", Port:9154}, Targets:[]loadbalancer.Addr{}}}, Switches:[]string{"release-ci-ci-op-k5cwk1pv-7cb14"}, Routers:[]string{"GR_release-ci-ci-op-k5cwk1pv-7cb14"}, Groups:[]string(nil)}, loadbalancer.LB{Name:"Service_openshift-dns/dns-default_UDP_node_router+switch_release-ci-ci-op-k5cwk1pv-7cb14", UUID:"", Protocol:"UDP", ExternalIDs:map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-dns/dns-default"}, Opts:loadbalancer.LBOpts{Unidling:false, Affinity:false, SkipSNAT:false}, Rules:[]loadbalancer.LBRule{loadbalancer.LBRule{Source:loadbalancer.Addr{IP:"10.43.0.10", Port:53}, Targets:[]loadbalancer.Addr{}}}, Switches:[]string{"release-ci-ci-op-k5cwk1pv-7cb14"}, Routers:[]string{"GR_release-ci-ci-op-k5cwk1pv-7cb14"}, Groups:[]string(nil)}} I1115 05:39:27.240798 63045 model_client.go:345] Create operations generated as: [{Op:insert Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_release-ci-ci-op-k5cwk1pv-7cb14 options:{GoMap:map[event:false reject:true skip_snat:false]} protocol:{GoSet:[tcp]} vips:{GoMap:map[10.43.0.10:53: 10.43.0.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996237}] I1115 05:39:27.240850 63045 model_client.go:345] Create operations generated as: [{Op:insert Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_UDP_node_router+switch_release-ci-ci-op-k5cwk1pv-7cb14 options:{GoMap:map[event:false reject:true skip_snat:false]} protocol:{GoSet:[udp]} vips:{GoMap:map[10.43.0.10:53:]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996238}] I1115 05:39:27.240925 63045 model_client.go:370] Mutate operations generated as: 
[{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:load_balancer Mutator:insert Value:{GoSet:[{GoUUID:u2596996237} {GoUUID:u2596996238}]}}] Timeout: Where:[where column _uuid == {55ba86b1-407f-4f90-86ba-a2378c8d6ccc}] Until: Durable: Comment: Lock: UUIDName:}] I1115 05:39:27.241086 63045 model_client.go:370] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:load_balancer Mutator:insert Value:{GoSet:[{GoUUID:u2596996237} {GoUUID:u2596996238}]}}] Timeout: Where:[where column _uuid == {5c5170fd-1297-4d21-9aac-501b448f04c1}] Until: Durable: Comment: Lock: UUIDName:}] I1115 05:39:27.241129 63045 model_client.go:379] Delete operations generated as: [{Op:delete Table:Load_Balancer Row:map[] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {69bf140c-11c9-48c5-ba36-a8ff6921bf07}] Until: Durable: Comment: Lock: UUIDName:}] I1115 05:39:27.241162 63045 model_client.go:379] Delete operations generated as: [{Op:delete Table:Load_Balancer Row:map[] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2c790ce1-a33c-4f51-9824-b25b0b77e391}] Until: Durable: Comment: Lock: UUIDName:}] I1115 05:39:27.241233 63045 transact.go:41] Configuring OVN: [{Op:wait Table:Load_Balancer Row:map[] Rows:[map[name:Service_openshift-dns/dns-default_TCP_node_router+switch_release-ci-ci-op-k5cwk1pv-7cb14]] Columns:[name] Mutations:[] Timeout:0xc000bcbe20 Where:[where column name == Service_openshift-dns/dns-default_TCP_node_router+switch_release-ci-ci-op-k5cwk1pv-7cb14] Until:!= Durable: Comment: Lock: UUIDName:} {Op:insert Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_release-ci-ci-op-k5cwk1pv-7cb14 options:{GoMap:map[event:false reject:true skip_snat:false]} protocol:{GoSet:[tcp]} vips:{GoMap:map[10.43.0.10:53: 10.43.0.10:9154:]}] 
Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996237} {Op:insert Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_UDP_node_router+switch_release-ci-ci-op-k5cwk1pv-7cb14 options:{GoMap:map[event:false reject:true skip_snat:false]} protocol:{GoSet:[udp]} vips:{GoMap:map[10.43.0.10:53:]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996238} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:load_balancer Mutator:insert Value:{GoSet:[{GoUUID:u2596996237} {GoUUID:u2596996238}]}}] Timeout: Where:[where column _uuid == {55ba86b1-407f-4f90-86ba-a2378c8d6ccc}] Until: Durable: Comment: Lock: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:load_balancer Mutator:insert Value:{GoSet:[{GoUUID:u2596996237} {GoUUID:u2596996238}]}}] Timeout: Where:[where column _uuid == {5c5170fd-1297-4d21-9aac-501b448f04c1}] Until: Durable: Comment: Lock: UUIDName:} {Op:delete Table:Load_Balancer Row:map[] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {69bf140c-11c9-48c5-ba36-a8ff6921bf07}] Until: Durable: Comment: Lock: UUIDName:} {Op:delete Table:Load_Balancer Row:map[] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2c790ce1-a33c-4f51-9824-b25b0b77e391}] Until: Durable: Comment: Lock: UUIDName:}] I1115 05:39:27.241442 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:wait Table:Load_Balancer Row:map[] Rows:[map[name:Service_openshift-dns/dns-default_TCP_node_router+switch_release-ci-ci-op-k5cwk1pv-7cb14]] Columns:[name] Mutations:[] Timeout:0xc000bcbe20 Where:[where column name == Service_openshift-dns/dns-default_TCP_node_router+switch_release-ci-ci-op-k5cwk1pv-7cb14] Until:!= Durable: Comment: Lock: UUIDName:} 
{Op:insert Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_release-ci-ci-op-k5cwk1pv-7cb14 options:{GoMap:map[event:false reject:true skip_snat:false]} protocol:{GoSet:[tcp]} vips:{GoMap:map[10.43.0.10:53: 10.43.0.10:9154:]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996237} {Op:insert Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_UDP_node_router+switch_release-ci-ci-op-k5cwk1pv-7cb14 options:{GoMap:map[event:false reject:true skip_snat:false]} protocol:{GoSet:[udp]} vips:{GoMap:map[10.43.0.10:53:]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996238} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:load_balancer Mutator:insert Value:{GoSet:[{GoUUID:u2596996237} {GoUUID:u2596996238}]}}] Timeout: Where:[where column _uuid == {55ba86b1-407f-4f90-86ba-a2378c8d6ccc}] Until: Durable: Comment: Lock: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:load_balancer Mutator:insert Value:{GoSet:[{GoUUID:u2596996237} {GoUUID:u2596996238}]}}] Timeout: Where:[where column _uuid == {5c5170fd-1297-4d21-9aac-501b448f04c1}] Until: Durable: Comment: Lock: UUIDName:} {Op:delete Table:Load_Balancer Row:map[] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {69bf140c-11c9-48c5-ba36-a8ff6921bf07}] Until: Durable: Comment: Lock: UUIDName:} {Op:delete Table:Load_Balancer Row:map[] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {2c790ce1-a33c-4f51-9824-b25b0b77e391}] Until: Durable: Comment: Lock: UUIDName:}]" I1115 05:39:27.251118 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" 
"table"="Load_Balancer" "uuid"="3e45c4ed-cfbf-4f21-9775-e08690e1b49b" I1115 05:39:27.251191 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Northbound" "table"="Load_Balancer" "uuid"="3e45c4ed-cfbf-4f21-9775-e08690e1b49b" "model"="&{UUID:3e45c4ed-cfbf-4f21-9775-e08690e1b49b ExternalIDs:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:default/kubernetes] HealthCheck:[] IPPortMappings:map[] Name:Service_default/kubernetes_TCP_node_router_release-ci-ci-op-k5cwk1pv-7cb14 Options:map[event:false reject:true skip_snat:false] Protocol:0xc000c02fb0 SelectionFields:[] Vips:map[10.43.0.1:443:169.254.169.2:6443]}" I1115 05:39:27.251209 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Logical_Router" "uuid"="5c5170fd-1297-4d21-9aac-501b448f04c1" I1115 05:39:27.251265 63045 cache.go:1040] cache "msg"="updated row" "database"="OVN_Northbound" "table"="Logical_Router" "uuid"="5c5170fd-1297-4d21-9aac-501b448f04c1" "new"="&{UUID:5c5170fd-1297-4d21-9aac-501b448f04c1 Copp:0xc000c03210 Enabled: ExternalIDs:map[physical_ip:10.0.0.2 physical_ips:10.0.0.2] LoadBalancer:[3e45c4ed-cfbf-4f21-9775-e08690e1b49b] LoadBalancerGroup:[7b81a844-05a7-4d75-90db-fc377eeda1a5] Name:GR_release-ci-ci-op-k5cwk1pv-7cb14 Nat:[] Options:map[always_learn_from_arp_request:false chassis:77436c83-1258-484f-b8d8-ec91acb3c8f3 dynamic_neigh_routers:true lb_force_snat_ip:router_ip snat-ct-zone:0] Policies:[] Ports:[ad17b3d7-02ae-4d31-a20c-ac73ecf7280f 5b6e2b30-c89d-433b-9fbe-7761cab3f2f1] StaticRoutes:[d6521a95-c7a3-4e9b-b5b0-5f28a72008a9 273fffd0-48ca-49c6-92b3-e3d72b863200]}" "old"="&{UUID:5c5170fd-1297-4d21-9aac-501b448f04c1 Copp:0xc000c03270 Enabled: ExternalIDs:map[physical_ip:10.0.0.2 physical_ips:10.0.0.2] LoadBalancer:[] LoadBalancerGroup:[7b81a844-05a7-4d75-90db-fc377eeda1a5] Name:GR_release-ci-ci-op-k5cwk1pv-7cb14 Nat:[] Options:map[always_learn_from_arp_request:false chassis:77436c83-1258-484f-b8d8-ec91acb3c8f3 dynamic_neigh_routers:true 
lb_force_snat_ip:router_ip snat-ct-zone:0] Policies:[] Ports:[ad17b3d7-02ae-4d31-a20c-ac73ecf7280f 5b6e2b30-c89d-433b-9fbe-7761cab3f2f1] StaticRoutes:[d6521a95-c7a3-4e9b-b5b0-5f28a72008a9 273fffd0-48ca-49c6-92b3-e3d72b863200]}" I1115 05:39:27.251463 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Logical_Switch" "uuid"="55ba86b1-407f-4f90-86ba-a2378c8d6ccc" I1115 05:39:27.251526 63045 cache.go:1040] cache "msg"="updated row" "database"="OVN_Northbound" "table"="Logical_Switch" "uuid"="55ba86b1-407f-4f90-86ba-a2378c8d6ccc" "new"="&{UUID:55ba86b1-407f-4f90-86ba-a2378c8d6ccc ACLs:[4d8ad76c-5917-4379-aaa9-fc4d3fbb25cd] Copp: DNSRecords:[] ExternalIDs:map[] ForwardingGroups:[] LoadBalancer:[c4d96c7e-1529-457a-9c00-dbe41d077136 d3771994-4315-4514-9d8d-dbdf2a0232c0 fd289ee5-55f1-454a-a4ab-7b4cda27a8f2] LoadBalancerGroup:[7b81a844-05a7-4d75-90db-fc377eeda1a5] Name:release-ci-ci-op-k5cwk1pv-7cb14 OtherConfig:map[mcast_eth_src:0a:58:0a:2a:00:01 mcast_ip4_src:10.42.0.1 mcast_querier:true mcast_snoop:true subnet:10.42.0.0/24] Ports:[16e74789-d810-4e2a-86f4-f17eb9166ace c690cf3a-b770-41dc-97e7-f0d8a5d9708e] QOSRules:[]}" "old"="&{UUID:55ba86b1-407f-4f90-86ba-a2378c8d6ccc ACLs:[4d8ad76c-5917-4379-aaa9-fc4d3fbb25cd] Copp: DNSRecords:[] ExternalIDs:map[] ForwardingGroups:[] LoadBalancer:[c4d96c7e-1529-457a-9c00-dbe41d077136 2c790ce1-a33c-4f51-9824-b25b0b77e391 69bf140c-11c9-48c5-ba36-a8ff6921bf07] LoadBalancerGroup:[7b81a844-05a7-4d75-90db-fc377eeda1a5] Name:release-ci-ci-op-k5cwk1pv-7cb14 OtherConfig:map[mcast_eth_src:0a:58:0a:2a:00:01 mcast_ip4_src:10.42.0.1 mcast_querier:true mcast_snoop:true subnet:10.42.0.0/24] Ports:[16e74789-d810-4e2a-86f4-f17eb9166ace c690cf3a-b770-41dc-97e7-f0d8a5d9708e] QOSRules:[]}" I1115 05:39:27.251545 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Logical_Router" "uuid"="5c5170fd-1297-4d21-9aac-501b448f04c1" I1115 05:39:27.251592 63045 cache.go:1040] cache 
"msg"="updated row" "database"="OVN_Northbound" "table"="Logical_Router" "uuid"="5c5170fd-1297-4d21-9aac-501b448f04c1" "new"="&{UUID:5c5170fd-1297-4d21-9aac-501b448f04c1 Copp:0xc000a989c0 Enabled: ExternalIDs:map[physical_ip:10.0.0.2 physical_ips:10.0.0.2] LoadBalancer:[3e45c4ed-cfbf-4f21-9775-e08690e1b49b d3771994-4315-4514-9d8d-dbdf2a0232c0 fd289ee5-55f1-454a-a4ab-7b4cda27a8f2] LoadBalancerGroup:[7b81a844-05a7-4d75-90db-fc377eeda1a5] Name:GR_release-ci-ci-op-k5cwk1pv-7cb14 Nat:[] Options:map[always_learn_from_arp_request:false chassis:77436c83-1258-484f-b8d8-ec91acb3c8f3 dynamic_neigh_routers:true lb_force_snat_ip:router_ip snat-ct-zone:0] Policies:[] Ports:[ad17b3d7-02ae-4d31-a20c-ac73ecf7280f 5b6e2b30-c89d-433b-9fbe-7761cab3f2f1] StaticRoutes:[d6521a95-c7a3-4e9b-b5b0-5f28a72008a9 273fffd0-48ca-49c6-92b3-e3d72b863200]}" "old"="&{UUID:5c5170fd-1297-4d21-9aac-501b448f04c1 Copp:0xc000a98a70 Enabled: ExternalIDs:map[physical_ip:10.0.0.2 physical_ips:10.0.0.2] LoadBalancer:[3e45c4ed-cfbf-4f21-9775-e08690e1b49b] LoadBalancerGroup:[7b81a844-05a7-4d75-90db-fc377eeda1a5] Name:GR_release-ci-ci-op-k5cwk1pv-7cb14 Nat:[] Options:map[always_learn_from_arp_request:false chassis:77436c83-1258-484f-b8d8-ec91acb3c8f3 dynamic_neigh_routers:true lb_force_snat_ip:router_ip snat-ct-zone:0] Policies:[] Ports:[ad17b3d7-02ae-4d31-a20c-ac73ecf7280f 5b6e2b30-c89d-433b-9fbe-7761cab3f2f1] StaticRoutes:[d6521a95-c7a3-4e9b-b5b0-5f28a72008a9 273fffd0-48ca-49c6-92b3-e3d72b863200]}" I1115 05:39:27.251607 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Load_Balancer" "uuid"="2c790ce1-a33c-4f51-9824-b25b0b77e391" I1115 05:39:27.251629 63045 cache.go:1054] cache "msg"="deleting row" "database"="OVN_Northbound" "table"="Load_Balancer" "uuid"="2c790ce1-a33c-4f51-9824-b25b0b77e391" "model"="&{UUID:2c790ce1-a33c-4f51-9824-b25b0b77e391 ExternalIDs:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default] HealthCheck:[] IPPortMappings:map[] 
Name:Service_openshift-dns/dns-default_TCP_node_switch_release-ci-ci-op-k5cwk1pv-7cb14 Options:map[event:false reject:true skip_snat:false] Protocol:0xc000a98f10 SelectionFields:[] Vips:map[10.43.0.10:53: 10.43.0.10:9154:]}" I1115 05:39:27.251644 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Load_Balancer" "uuid"="69bf140c-11c9-48c5-ba36-a8ff6921bf07" I1115 05:39:27.251670 63045 cache.go:1054] cache "msg"="deleting row" "database"="OVN_Northbound" "table"="Load_Balancer" "uuid"="69bf140c-11c9-48c5-ba36-a8ff6921bf07" "model"="&{UUID:69bf140c-11c9-48c5-ba36-a8ff6921bf07 ExternalIDs:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default] HealthCheck:[] IPPortMappings:map[] Name:Service_openshift-dns/dns-default_UDP_node_switch_release-ci-ci-op-k5cwk1pv-7cb14 Options:map[event:false reject:true skip_snat:false] Protocol:0xc000a99170 SelectionFields:[] Vips:map[10.43.0.10:53:]}" I1115 05:39:27.251683 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Load_Balancer" "uuid"="d3771994-4315-4514-9d8d-dbdf2a0232c0" I1115 05:39:27.251707 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Northbound" "table"="Load_Balancer" "uuid"="d3771994-4315-4514-9d8d-dbdf2a0232c0" "model"="&{UUID:d3771994-4315-4514-9d8d-dbdf2a0232c0 ExternalIDs:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default] HealthCheck:[] IPPortMappings:map[] Name:Service_openshift-dns/dns-default_UDP_node_router+switch_release-ci-ci-op-k5cwk1pv-7cb14 Options:map[event:false reject:true skip_snat:false] Protocol:0xc000a993b0 SelectionFields:[] Vips:map[10.43.0.10:53:]}" I1115 05:39:27.251721 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Load_Balancer" "uuid"="fd289ee5-55f1-454a-a4ab-7b4cda27a8f2" I1115 05:39:27.251745 63045 cache.go:1019] cache "msg"="inserting row" "database"="OVN_Northbound" "table"="Load_Balancer" 
"uuid"="fd289ee5-55f1-454a-a4ab-7b4cda27a8f2" "model"="&{UUID:fd289ee5-55f1-454a-a4ab-7b4cda27a8f2 ExternalIDs:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default] HealthCheck:[] IPPortMappings:map[] Name:Service_openshift-dns/dns-default_TCP_node_router+switch_release-ci-ci-op-k5cwk1pv-7cb14 Options:map[event:false reject:true skip_snat:false] Protocol:0xc000a99610 SelectionFields:[] Vips:map[10.43.0.10:53: 10.43.0.10:9154:]}" I1115 05:39:27.251797 63045 loadbalancer.go:205] Deleted 2 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"openshift-dns/dns-default"} I1115 05:39:27.251816 63045 services_controller.go:245] Finished syncing service dns-default on namespace openshift-dns : 11.584743ms I1115 05:39:27.251844 63045 loadbalancer.go:205] Deleted 0 stale LBs for map[string]string{"k8s.ovn.org/kind":"Service", "k8s.ovn.org/owner":"default/kubernetes"} I1115 05:39:27.251854 63045 services_controller.go:245] Finished syncing service kubernetes on namespace default : 12.422288ms I1115 05:39:27.255462 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Southbound" "table"="Port_Binding" "uuid"="055c383e-9a3c-44b9-bfeb-2ce5e04fe0db" I1115 05:39:27.255583 63045 cache.go:1040] cache "msg"="updated row" "database"="OVN_Southbound" "table"="Port_Binding" "uuid"="055c383e-9a3c-44b9-bfeb-2ce5e04fe0db" "new"="&{UUID:055c383e-9a3c-44b9-bfeb-2ce5e04fe0db Chassis:0xc000a99af0 Datapath:565a72a5-31f1-429b-8168-c81f18c77757 Encap: ExternalIDs:map[gateway-physical-ip:yes] GatewayChassis:[] HaChassisGroup: LogicalPort:rtoe-GR_release-ci-ci-op-k5cwk1pv-7cb14 MAC:[42:01:0a:00:00:02 10.0.0.2/32] NatAddresses:[] Options:map[l3gateway-chassis:77436c83-1258-484f-b8d8-ec91acb3c8f3 peer:etor-GR_release-ci-ci-op-k5cwk1pv-7cb14] ParentPort: RequestedChassis: Tag: TunnelKey:2 Type:l3gateway Up:0xc000a88020 VirtualParent:}" "old"="&{UUID:055c383e-9a3c-44b9-bfeb-2ce5e04fe0db Chassis: 
Datapath:565a72a5-31f1-429b-8168-c81f18c77757 Encap: ExternalIDs:map[gateway-physical-ip:yes] GatewayChassis:[] HaChassisGroup: LogicalPort:rtoe-GR_release-ci-ci-op-k5cwk1pv-7cb14 MAC:[42:01:0a:00:00:02 10.0.0.2/32] NatAddresses:[] Options:map[l3gateway-chassis:77436c83-1258-484f-b8d8-ec91acb3c8f3 peer:etor-GR_release-ci-ci-op-k5cwk1pv-7cb14] ParentPort: RequestedChassis: Tag: TunnelKey:2 Type:l3gateway Up:0xc000a88030 VirtualParent:}" I1115 05:39:27.255609 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Southbound" "table"="Port_Binding" "uuid"="692037f9-888f-43d5-8307-38a50e947f76" I1115 05:39:27.255656 63045 cache.go:1040] cache "msg"="updated row" "database"="OVN_Southbound" "table"="Port_Binding" "uuid"="692037f9-888f-43d5-8307-38a50e947f76" "new"="&{UUID:692037f9-888f-43d5-8307-38a50e947f76 Chassis:0xc000a99ec0 Datapath:6509bf50-18c5-4f84-915a-4d0c42f51967 Encap: ExternalIDs:map[] GatewayChassis:[] HaChassisGroup: LogicalPort:etor-GR_release-ci-ci-op-k5cwk1pv-7cb14 MAC:[42:01:0a:00:00:02] NatAddresses:[42:01:0a:00:00:02 10.0.0.2] Options:map[l3gateway-chassis:77436c83-1258-484f-b8d8-ec91acb3c8f3 peer:rtoe-GR_release-ci-ci-op-k5cwk1pv-7cb14] ParentPort: RequestedChassis: Tag: TunnelKey:2 Type:l3gateway Up:0xc000a88430 VirtualParent:}" "old"="&{UUID:692037f9-888f-43d5-8307-38a50e947f76 Chassis: Datapath:6509bf50-18c5-4f84-915a-4d0c42f51967 Encap: ExternalIDs:map[] GatewayChassis:[] HaChassisGroup: LogicalPort:etor-GR_release-ci-ci-op-k5cwk1pv-7cb14 MAC:[42:01:0a:00:00:02] NatAddresses:[42:01:0a:00:00:02 10.0.0.2] Options:map[l3gateway-chassis:77436c83-1258-484f-b8d8-ec91acb3c8f3 peer:rtoe-GR_release-ci-ci-op-k5cwk1pv-7cb14] ParentPort: RequestedChassis: Tag: TunnelKey:2 Type:l3gateway Up:0xc000a88440 VirtualParent:}" I1115 05:39:27.272687 63045 ovs.go:203] Exec(24): stdout: "" I1115 05:39:27.272708 63045 ovs.go:204] Exec(24): stderr: "" I1115 05:39:27.273527 63045 ovs.go:203] Exec(25): stdout: "NXST_AGGREGATE reply (xid=0x4): packet_count=0 
byte_count=0 flow_count=484\n" I1115 05:39:27.273544 63045 ovs.go:204] Exec(25): stderr: "" I1115 05:39:27.273571 63045 ovs.go:200] Exec(26): /usr/bin/ovs-vsctl --timeout=15 --if-exists get interface patch-br-ex_release-ci-ci-op-k5cwk1pv-7cb14-to-br-int ofport I1115 05:39:27.283153 63045 ovs.go:203] Exec(26): stdout: "2\n" I1115 05:39:27.283175 63045 ovs.go:204] Exec(26): stderr: "" I1115 05:39:27.283183 63045 gateway.go:287] Gateway is ready I1115 05:39:27.283191 63045 gateway_localnet.go:83] Creating Local Gateway Openflow Manager I1115 05:39:27.283206 63045 ovs.go:200] Exec(27): /usr/bin/ovs-vsctl --timeout=15 get Interface patch-br-ex_release-ci-ci-op-k5cwk1pv-7cb14-to-br-int ofport I1115 05:39:27.292797 63045 ovs.go:203] Exec(27): stdout: "2\n" I1115 05:39:27.292818 63045 ovs.go:204] Exec(27): stderr: "" I1115 05:39:27.292831 63045 ovs.go:200] Exec(28): /usr/bin/ovs-vsctl --timeout=15 get interface eth0 ofport I1115 05:39:27.302203 63045 ovs.go:203] Exec(28): stdout: "1\n" I1115 05:39:27.302224 63045 ovs.go:204] Exec(28): stderr: "" I1115 05:39:27.302381 63045 node_ip_handler_linux.go:237] Skipping non-useable IP address for host: 127.0.0.1 I1115 05:39:27.302396 63045 node_ip_handler_linux.go:237] Skipping non-useable IP address for host: 10.42.0.2 I1115 05:39:27.302404 63045 node_ip_handler_linux.go:237] Skipping non-useable IP address for host: ::1 I1115 05:39:27.302411 63045 node_ip_handler_linux.go:237] Skipping non-useable IP address for host: fe80::c8b8:1f81:4b0c:7b33 I1115 05:39:27.302420 63045 node_ip_handler_linux.go:237] Skipping non-useable IP address for host: fe80::5017:adff:fee6:117d I1115 05:39:27.302424 63045 node_ip_handler_linux.go:245] Node address annotation being set to: map[10.0.0.2:{}] addrChanged: true I1115 05:39:27.302452 63045 kube.go:97] Setting annotations map[k8s.ovn.org/host-addresses:["10.0.0.2"]] on node release-ci-ci-op-k5cwk1pv-7cb14 I1115 05:39:27.318535 63045 gateway_shared_intf.go:1655] Setting OVN Masquerade route with 
source: 10.0.0.2 I1115 05:39:27.318664 63045 ovs.go:200] Exec(29): /usr/sbin/ip route replace table 7 10.43.0.0/16 via 10.42.0.1 dev ovn-k8s-mp0 I1115 05:39:27.320589 63045 obj_retry.go:1429] Update event received for resource *v1.Node, old object is equal to new: false I1115 05:39:27.320610 63045 obj_retry.go:1472] Update event received for *v1.Node release-ci-ci-op-k5cwk1pv-7cb14 I1115 05:39:27.320719 63045 master.go:1364] Adding or Updating Node "release-ci-ci-op-k5cwk1pv-7cb14" I1115 05:39:27.320876 63045 model_client.go:354] Update operations generated as: [{Op:update Table:Logical_Router Row:map[copp:{GoSet:[{GoUUID:361a0ade-a33a-40af-b4f6-dc5a6910be61}]} external_ids:{GoMap:map[physical_ip:10.0.0.2 physical_ips:10.0.0.2]} load_balancer_group:{GoSet:[{GoUUID:7b81a844-05a7-4d75-90db-fc377eeda1a5}]} options:{GoMap:map[always_learn_from_arp_request:false chassis:77436c83-1258-484f-b8d8-ec91acb3c8f3 dynamic_neigh_routers:true lb_force_snat_ip:router_ip snat-ct-zone:0]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {5c5170fd-1297-4d21-9aac-501b448f04c1}] Until: Durable: Comment: Lock: UUIDName:}] I1115 05:39:27.320916 63045 transact.go:41] Configuring OVN: [{Op:update Table:Logical_Router Row:map[copp:{GoSet:[{GoUUID:361a0ade-a33a-40af-b4f6-dc5a6910be61}]} external_ids:{GoMap:map[physical_ip:10.0.0.2 physical_ips:10.0.0.2]} load_balancer_group:{GoSet:[{GoUUID:7b81a844-05a7-4d75-90db-fc377eeda1a5}]} options:{GoMap:map[always_learn_from_arp_request:false chassis:77436c83-1258-484f-b8d8-ec91acb3c8f3 dynamic_neigh_routers:true lb_force_snat_ip:router_ip snat-ct-zone:0]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {5c5170fd-1297-4d21-9aac-501b448f04c1}] Until: Durable: Comment: Lock: UUIDName:}] I1115 05:39:27.321011 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:update Table:Logical_Router Row:map[copp:{GoSet:[{GoUUID:361a0ade-a33a-40af-b4f6-dc5a6910be61}]} 
external_ids:{GoMap:map[physical_ip:10.0.0.2 physical_ips:10.0.0.2]} load_balancer_group:{GoSet:[{GoUUID:7b81a844-05a7-4d75-90db-fc377eeda1a5}]} options:{GoMap:map[always_learn_from_arp_request:false chassis:77436c83-1258-484f-b8d8-ec91acb3c8f3 dynamic_neigh_routers:true lb_force_snat_ip:router_ip snat-ct-zone:0]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {5c5170fd-1297-4d21-9aac-501b448f04c1}] Until: Durable: Comment: Lock: UUIDName:}]" I1115 05:39:27.321099 63045 ovs.go:203] Exec(29): stdout: "" I1115 05:39:27.321109 63045 ovs.go:204] Exec(29): stderr: "" I1115 05:39:27.321115 63045 gateway_shared_intf.go:1356] Successfully added route into custom routing table: 7 I1115 05:39:27.321123 63045 ovs.go:200] Exec(30): /usr/sbin/ip -4 rule I1115 05:39:27.322650 63045 ovs.go:203] Exec(30): stdout: "0:\tfrom all lookup local\n32766:\tfrom all lookup main\n32767:\tfrom all lookup default\n" I1115 05:39:27.322673 63045 ovs.go:204] Exec(30): stderr: "" I1115 05:39:27.322682 63045 ovs.go:200] Exec(31): /usr/sbin/ip -4 rule add fwmark 0x1745ec lookup 7 prio 30 I1115 05:39:27.324059 63045 ovs.go:203] Exec(31): stdout: "" I1115 05:39:27.324076 63045 ovs.go:204] Exec(31): stderr: "" I1115 05:39:27.324086 63045 ovs.go:200] Exec(32): /usr/sbin/sysctl -w net.ipv4.conf.ovn-k8s-mp0.rp_filter=2 I1115 05:39:27.326153 63045 ovs.go:203] Exec(32): stdout: "net.ipv4.conf.ovn-k8s-mp0.rp_filter = 2\n" I1115 05:39:27.326170 63045 ovs.go:204] Exec(32): stderr: "" I1115 05:39:27.326180 63045 ovs.go:200] Exec(33): /usr/bin/ovs-vsctl --timeout=15 --if-exists get interface patch-br-ex_release-ci-ci-op-k5cwk1pv-7cb14-to-br-int ofport I1115 05:39:27.338181 63045 model_client.go:354] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[router]} options:{GoMap:map[router-port:rtoj-GR_release-ci-ci-op-k5cwk1pv-7cb14]} port_security:{GoSet:[]} tag_request:{GoSet:[]} type:router] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where 
column _uuid == {8cf571f0-00a4-41ca-97ae-ef814907077a}] Until: Durable: Comment: Lock: UUIDName:}] I1115 05:39:27.338251 63045 model_client.go:370] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:8cf571f0-00a4-41ca-97ae-ef814907077a}]}}] Timeout: Where:[where column _uuid == {4a7292ab-0326-4438-83d1-4c6f4765fce0}] Until: Durable: Comment: Lock: UUIDName:}] I1115 05:39:27.338267 63045 transact.go:41] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[router]} options:{GoMap:map[router-port:rtoj-GR_release-ci-ci-op-k5cwk1pv-7cb14]} port_security:{GoSet:[]} tag_request:{GoSet:[]} type:router] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {8cf571f0-00a4-41ca-97ae-ef814907077a}] Until: Durable: Comment: Lock: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:8cf571f0-00a4-41ca-97ae-ef814907077a}]}}] Timeout: Where:[where column _uuid == {4a7292ab-0326-4438-83d1-4c6f4765fce0}] Until: Durable: Comment: Lock: UUIDName:}] I1115 05:39:27.338369 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[router]} options:{GoMap:map[router-port:rtoj-GR_release-ci-ci-op-k5cwk1pv-7cb14]} port_security:{GoSet:[]} tag_request:{GoSet:[]} type:router] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {8cf571f0-00a4-41ca-97ae-ef814907077a}] Until: Durable: Comment: Lock: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:8cf571f0-00a4-41ca-97ae-ef814907077a}]}}] Timeout: Where:[where column _uuid == {4a7292ab-0326-4438-83d1-4c6f4765fce0}] Until: Durable: Comment: Lock: UUIDName:}]" I1115 05:39:27.338746 63045 model_client.go:354] Update 
operations generated as: [{Op:update Table:Logical_Router_Port Row:map[mac:0a:58:64:40:00:02 networks:{GoSet:[100.64.0.2/16]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ad17b3d7-02ae-4d31-a20c-ac73ecf7280f}] Until: Durable: Comment: Lock: UUIDName:}] I1115 05:39:27.338793 63045 model_client.go:370] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ad17b3d7-02ae-4d31-a20c-ac73ecf7280f}]}}] Timeout: Where:[where column _uuid == {5c5170fd-1297-4d21-9aac-501b448f04c1}] Until: Durable: Comment: Lock: UUIDName:}] I1115 05:39:27.338831 63045 transact.go:41] Configuring OVN: [{Op:update Table:Logical_Router_Port Row:map[mac:0a:58:64:40:00:02 networks:{GoSet:[100.64.0.2/16]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ad17b3d7-02ae-4d31-a20c-ac73ecf7280f}] Until: Durable: Comment: Lock: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ad17b3d7-02ae-4d31-a20c-ac73ecf7280f}]}}] Timeout: Where:[where column _uuid == {5c5170fd-1297-4d21-9aac-501b448f04c1}] Until: Durable: Comment: Lock: UUIDName:}] I1115 05:39:27.338885 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:update Table:Logical_Router_Port Row:map[mac:0a:58:64:40:00:02 networks:{GoSet:[100.64.0.2/16]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {ad17b3d7-02ae-4d31-a20c-ac73ecf7280f}] Until: Durable: Comment: Lock: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:ad17b3d7-02ae-4d31-a20c-ac73ecf7280f}]}}] Timeout: Where:[where column _uuid == {5c5170fd-1297-4d21-9aac-501b448f04c1}] Until: Durable: Comment: Lock: UUIDName:}]" I1115 05:39:27.339116 63045 model_client.go:354] Update operations generated as: [{Op:update 
Table:Logical_Router_Static_Route Row:map[nexthop:100.64.0.1] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d6521a95-c7a3-4e9b-b5b0-5f28a72008a9}] Until: Durable: Comment: Lock: UUIDName:}] I1115 05:39:27.339158 63045 model_client.go:370] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:d6521a95-c7a3-4e9b-b5b0-5f28a72008a9}]}}] Timeout: Where:[where column _uuid == {5c5170fd-1297-4d21-9aac-501b448f04c1}] Until: Durable: Comment: Lock: UUIDName:}] I1115 05:39:27.339177 63045 transact.go:41] Configuring OVN: [{Op:update Table:Logical_Router_Static_Route Row:map[nexthop:100.64.0.1] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d6521a95-c7a3-4e9b-b5b0-5f28a72008a9}] Until: Durable: Comment: Lock: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:d6521a95-c7a3-4e9b-b5b0-5f28a72008a9}]}}] Timeout: Where:[where column _uuid == {5c5170fd-1297-4d21-9aac-501b448f04c1}] Until: Durable: Comment: Lock: UUIDName:}] I1115 05:39:27.339218 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:update Table:Logical_Router_Static_Route Row:map[nexthop:100.64.0.1] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d6521a95-c7a3-4e9b-b5b0-5f28a72008a9}] Until: Durable: Comment: Lock: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:d6521a95-c7a3-4e9b-b5b0-5f28a72008a9}]}}] Timeout: Where:[where column _uuid == {5c5170fd-1297-4d21-9aac-501b448f04c1}] Until: Durable: Comment: Lock: UUIDName:}]" I1115 05:39:27.339440 63045 model_client.go:354] Update operations generated as: [{Op:update Table:Logical_Router_Port Row:map[external_ids:{GoMap:map[gateway-physical-ip:yes]} 
mac:42:01:0a:00:00:02 networks:{GoSet:[10.0.0.2/32]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {5b6e2b30-c89d-433b-9fbe-7761cab3f2f1}] Until: Durable: Comment: Lock: UUIDName:}] I1115 05:39:27.339483 63045 model_client.go:370] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:5b6e2b30-c89d-433b-9fbe-7761cab3f2f1}]}}] Timeout: Where:[where column _uuid == {5c5170fd-1297-4d21-9aac-501b448f04c1}] Until: Durable: Comment: Lock: UUIDName:}] I1115 05:39:27.339507 63045 transact.go:41] Configuring OVN: [{Op:update Table:Logical_Router_Port Row:map[external_ids:{GoMap:map[gateway-physical-ip:yes]} mac:42:01:0a:00:00:02 networks:{GoSet:[10.0.0.2/32]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {5b6e2b30-c89d-433b-9fbe-7761cab3f2f1}] Until: Durable: Comment: Lock: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:5b6e2b30-c89d-433b-9fbe-7761cab3f2f1}]}}] Timeout: Where:[where column _uuid == {5c5170fd-1297-4d21-9aac-501b448f04c1}] Until: Durable: Comment: Lock: UUIDName:}] I1115 05:39:27.339557 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:update Table:Logical_Router_Port Row:map[external_ids:{GoMap:map[gateway-physical-ip:yes]} mac:42:01:0a:00:00:02 networks:{GoSet:[10.0.0.2/32]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {5b6e2b30-c89d-433b-9fbe-7761cab3f2f1}] Until: Durable: Comment: Lock: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:5b6e2b30-c89d-433b-9fbe-7761cab3f2f1}]}}] Timeout: Where:[where column _uuid == {5c5170fd-1297-4d21-9aac-501b448f04c1}] Until: Durable: Comment: Lock: UUIDName:}]" I1115 05:39:27.339827 63045 model_client.go:354] Update operations 
generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[unknown]} options:{GoMap:map[network_name:physnet]} port_security:{GoSet:[]} tag_request:{GoSet:[0]} type:localnet] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d93174fc-1e0a-4f3a-bb24-e656a84b1816}] Until: Durable: Comment: Lock: UUIDName:}] I1115 05:39:27.339933 63045 model_client.go:354] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[42:01:0a:00:00:02]} options:{GoMap:map[router-port:rtoe-GR_release-ci-ci-op-k5cwk1pv-7cb14]} port_security:{GoSet:[]} tag_request:{GoSet:[]} type:router] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {e78d83a5-a48f-440a-bed3-69268ccf9f56}] Until: Durable: Comment: Lock: UUIDName:}] I1115 05:39:27.339970 63045 model_client.go:370] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:d93174fc-1e0a-4f3a-bb24-e656a84b1816} {GoUUID:e78d83a5-a48f-440a-bed3-69268ccf9f56}]}}] Timeout: Where:[where column _uuid == {6c38102e-0494-490b-8261-cf3b14dff19d}] Until: Durable: Comment: Lock: UUIDName:}] I1115 05:39:27.339985 63045 transact.go:41] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[unknown]} options:{GoMap:map[network_name:physnet]} port_security:{GoSet:[]} tag_request:{GoSet:[0]} type:localnet] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d93174fc-1e0a-4f3a-bb24-e656a84b1816}] Until: Durable: Comment: Lock: UUIDName:} {Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[42:01:0a:00:00:02]} options:{GoMap:map[router-port:rtoe-GR_release-ci-ci-op-k5cwk1pv-7cb14]} port_security:{GoSet:[]} tag_request:{GoSet:[]} type:router] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {e78d83a5-a48f-440a-bed3-69268ccf9f56}] Until: Durable: Comment: Lock: UUIDName:} {Op:mutate 
Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:d93174fc-1e0a-4f3a-bb24-e656a84b1816} {GoUUID:e78d83a5-a48f-440a-bed3-69268ccf9f56}]}}] Timeout: Where:[where column _uuid == {6c38102e-0494-490b-8261-cf3b14dff19d}] Until: Durable: Comment: Lock: UUIDName:}] I1115 05:39:27.340059 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[unknown]} options:{GoMap:map[network_name:physnet]} port_security:{GoSet:[]} tag_request:{GoSet:[0]} type:localnet] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d93174fc-1e0a-4f3a-bb24-e656a84b1816}] Until: Durable: Comment: Lock: UUIDName:} {Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[42:01:0a:00:00:02]} options:{GoMap:map[router-port:rtoe-GR_release-ci-ci-op-k5cwk1pv-7cb14]} port_security:{GoSet:[]} tag_request:{GoSet:[]} type:router] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {e78d83a5-a48f-440a-bed3-69268ccf9f56}] Until: Durable: Comment: Lock: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:d93174fc-1e0a-4f3a-bb24-e656a84b1816} {GoUUID:e78d83a5-a48f-440a-bed3-69268ccf9f56}]}}] Timeout: Where:[where column _uuid == {6c38102e-0494-490b-8261-cf3b14dff19d}] Until: Durable: Comment: Lock: UUIDName:}]" I1115 05:39:27.340327 63045 model_client.go:354] Update operations generated as: [{Op:update Table:Logical_Router_Static_Route Row:map[nexthop:10.0.0.1] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {273fffd0-48ca-49c6-92b3-e3d72b863200}] Until: Durable: Comment: Lock: UUIDName:}] I1115 05:39:27.340366 63045 model_client.go:370] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert 
Value:{GoSet:[{GoUUID:273fffd0-48ca-49c6-92b3-e3d72b863200}]}}] Timeout: Where:[where column _uuid == {5c5170fd-1297-4d21-9aac-501b448f04c1}] Until: Durable: Comment: Lock: UUIDName:}] I1115 05:39:27.340382 63045 transact.go:41] Configuring OVN: [{Op:update Table:Logical_Router_Static_Route Row:map[nexthop:10.0.0.1] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {273fffd0-48ca-49c6-92b3-e3d72b863200}] Until: Durable: Comment: Lock: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:273fffd0-48ca-49c6-92b3-e3d72b863200}]}}] Timeout: Where:[where column _uuid == {5c5170fd-1297-4d21-9aac-501b448f04c1}] Until: Durable: Comment: Lock: UUIDName:}] I1115 05:39:27.340421 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:update Table:Logical_Router_Static_Route Row:map[nexthop:10.0.0.1] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {273fffd0-48ca-49c6-92b3-e3d72b863200}] Until: Durable: Comment: Lock: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:273fffd0-48ca-49c6-92b3-e3d72b863200}]}}] Timeout: Where:[where column _uuid == {5c5170fd-1297-4d21-9aac-501b448f04c1}] Until: Durable: Comment: Lock: UUIDName:}]" I1115 05:39:27.340624 63045 model_client.go:354] Update operations generated as: [{Op:update Table:Logical_Router_Static_Route Row:map[ip_prefix:100.64.0.2 nexthop:100.64.0.2] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {5f57312a-3e27-4593-a961-66666188af7b}] Until: Durable: Comment: Lock: UUIDName:}] I1115 05:39:27.340662 63045 model_client.go:370] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:5f57312a-3e27-4593-a961-66666188af7b}]}}] Timeout: 
Where:[where column _uuid == {0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5}] Until: Durable: Comment: Lock: UUIDName:}] I1115 05:39:27.340676 63045 transact.go:41] Configuring OVN: [{Op:update Table:Logical_Router_Static_Route Row:map[ip_prefix:100.64.0.2 nexthop:100.64.0.2] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {5f57312a-3e27-4593-a961-66666188af7b}] Until: Durable: Comment: Lock: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:5f57312a-3e27-4593-a961-66666188af7b}]}}] Timeout: Where:[where column _uuid == {0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5}] Until: Durable: Comment: Lock: UUIDName:}] I1115 05:39:27.340718 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:update Table:Logical_Router_Static_Route Row:map[ip_prefix:100.64.0.2 nexthop:100.64.0.2] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {5f57312a-3e27-4593-a961-66666188af7b}] Until: Durable: Comment: Lock: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:5f57312a-3e27-4593-a961-66666188af7b}]}}] Timeout: Where:[where column _uuid == {0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5}] Until: Durable: Comment: Lock: UUIDName:}]" I1115 05:39:27.344134 63045 master.go:1448] When adding node release-ci-ci-op-k5cwk1pv-7cb14, found 3 pods to add to retryPods I1115 05:39:27.344157 63045 master.go:1454] Adding pod openshift-dns/node-resolver-jhcw4 to retryPods I1115 05:39:27.344171 63045 master.go:1454] Adding pod openshift-ovn-kubernetes/ovnkube-master-kdsb7 to retryPods I1115 05:39:27.344180 63045 master.go:1454] Adding pod openshift-ovn-kubernetes/ovnkube-node-b5wd2 to retryPods I1115 05:39:27.344185 63045 obj_retry.go:195] Iterate retry objects requested (resource *v1.Pod) I1115 05:39:27.344199 63045 obj_retry.go:1219] Retry channel got triggered: 
retrying failed objects of type *v1.Pod I1115 05:39:27.344204 63045 obj_retry.go:1194] Going to retry *v1.Pod resource setup for 3 number of resources: [openshift-dns/node-resolver-jhcw4 openshift-ovn-kubernetes/ovnkube-master-kdsb7 openshift-ovn-kubernetes/ovnkube-node-b5wd2] I1115 05:39:27.344213 63045 obj_retry.go:1203] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources I1115 05:39:27.344228 63045 obj_retry.go:1106] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-b5wd2 I1115 05:39:27.344240 63045 obj_retry.go:1157] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-b5wd2 I1115 05:39:27.344247 63045 obj_retry.go:1174] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-b5wd2 after 0 failed attempt(s) I1115 05:39:27.344253 63045 obj_retry.go:530] Recording success event on pod I1115 05:39:27.344261 63045 obj_retry.go:1106] Retry object setup: *v1.Pod openshift-dns/node-resolver-jhcw4 I1115 05:39:27.344269 63045 obj_retry.go:1157] Adding new object: *v1.Pod openshift-dns/node-resolver-jhcw4 I1115 05:39:27.344275 63045 obj_retry.go:1174] Retry successful for *v1.Pod openshift-dns/node-resolver-jhcw4 after 0 failed attempt(s) I1115 05:39:27.344278 63045 obj_retry.go:530] Recording success event on pod I1115 05:39:27.344285 63045 obj_retry.go:1106] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-master-kdsb7 I1115 05:39:27.344292 63045 obj_retry.go:1157] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-master-kdsb7 I1115 05:39:27.344298 63045 obj_retry.go:1174] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-master-kdsb7 after 0 failed attempt(s) I1115 05:39:27.344301 63045 obj_retry.go:530] Recording success event on pod I1115 05:39:27.344306 63045 obj_retry.go:1205] Function iterateRetryResources ended (in 102.542µs) I1115 05:39:27.344344 63045 ovs.go:203] Exec(33): stdout: "2\n" I1115 05:39:27.344355 63045 ovs.go:204] Exec(33): stderr: "" I1115 
05:39:27.344364 63045 ovs.go:200] Exec(34): /usr/bin/ovs-vsctl --timeout=15 --if-exists get interface eth0 ofport I1115 05:39:27.354166 63045 ovs.go:203] Exec(34): stdout: "1\n" I1115 05:39:27.354190 63045 ovs.go:204] Exec(34): stderr: "" I1115 05:39:27.367761 63045 obj_retry.go:1429] Update event received for resource *v1.Pod, old object is equal to new: false I1115 05:39:27.367783 63045 obj_retry.go:502] Recording update event on pod I1115 05:39:27.367796 63045 obj_retry.go:1472] Update event received for *v1.Pod openshift-ovn-kubernetes/ovnkube-master-kdsb7 I1115 05:39:27.367804 63045 obj_retry.go:530] Recording success event on pod I1115 05:39:27.380026 63045 gateway_iptables.go:67] Adding rule in table: mangle, chain: OUTPUT with args: "-j OVN-KUBE-ITP" for protocol: 0 I1115 05:39:27.383935 63045 gateway_iptables.go:70] Chain: "OUTPUT" in table: "mangle" already exists, skipping creation: running [/usr/sbin/iptables -t mangle -N OUTPUT --wait]: exit status 1: iptables: Chain already exists. I1115 05:39:27.391907 63045 gateway_iptables.go:67] Adding rule in table: nat, chain: OUTPUT with args: "-j OVN-KUBE-ITP" for protocol: 0 I1115 05:39:27.396585 63045 gateway_iptables.go:70] Chain: "OUTPUT" in table: "nat" already exists, skipping creation: running [/usr/sbin/iptables -t nat -N OUTPUT --wait]: exit status 1: iptables: Chain already exists. I1115 05:39:27.404792 63045 gateway_iptables.go:67] Adding rule in table: nat, chain: POSTROUTING with args: "-j OVN-KUBE-EGRESS-SVC" for protocol: 0 I1115 05:39:27.408890 63045 gateway_iptables.go:70] Chain: "POSTROUTING" in table: "nat" already exists, skipping creation: running [/usr/sbin/iptables -t nat -N POSTROUTING --wait]: exit status 1: iptables: Chain already exists. 
I1115 05:39:27.417325 63045 gateway_iptables.go:67] Adding rule in table: nat, chain: PREROUTING with args: "-j OVN-KUBE-NODEPORT" for protocol: 0 I1115 05:39:27.421454 63045 gateway_iptables.go:70] Chain: "PREROUTING" in table: "nat" already exists, skipping creation: running [/usr/sbin/iptables -t nat -N PREROUTING --wait]: exit status 1: iptables: Chain already exists. I1115 05:39:27.429705 63045 gateway_iptables.go:67] Adding rule in table: nat, chain: OUTPUT with args: "-j OVN-KUBE-NODEPORT" for protocol: 0 I1115 05:39:27.433771 63045 gateway_iptables.go:70] Chain: "OUTPUT" in table: "nat" already exists, skipping creation: running [/usr/sbin/iptables -t nat -N OUTPUT --wait]: exit status 1: iptables: Chain already exists. I1115 05:39:27.449877 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Northbound" "table"="Logical_Switch_Port" "uuid"="c690cf3a-b770-41dc-97e7-f0d8a5d9708e" I1115 05:39:27.449952 63045 cache.go:1040] cache "msg"="updated row" "database"="OVN_Northbound" "table"="Logical_Switch_Port" "uuid"="c690cf3a-b770-41dc-97e7-f0d8a5d9708e" "new"="&{UUID:c690cf3a-b770-41dc-97e7-f0d8a5d9708e Addresses:[52:17:ad:e6:11:7d 10.42.0.2] Dhcpv4Options: Dhcpv6Options: DynamicAddresses: Enabled: ExternalIDs:map[] HaChassisGroup: Name:k8s-release-ci-ci-op-k5cwk1pv-7cb14 Options:map[] ParentName: PortSecurity:[] Tag: TagRequest: Type: Up:0xc000914db0}" "old"="&{UUID:c690cf3a-b770-41dc-97e7-f0d8a5d9708e Addresses:[52:17:ad:e6:11:7d 10.42.0.2] Dhcpv4Options: Dhcpv6Options: DynamicAddresses: Enabled: ExternalIDs:map[] HaChassisGroup: Name:k8s-release-ci-ci-op-k5cwk1pv-7cb14 Options:map[] ParentName: PortSecurity:[] Tag: TagRequest: Type: Up:0xc000914dc0}" I1115 05:39:27.450038 63045 cache.go:999] cache "msg"="processing update" "database"="OVN_Southbound" "table"="Port_Binding" "uuid"="ca39f1f4-b2d6-421f-9354-db71d4e96db6" I1115 05:39:27.450101 63045 cache.go:1040] cache "msg"="updated row" "database"="OVN_Southbound" "table"="Port_Binding" 
"uuid"="ca39f1f4-b2d6-421f-9354-db71d4e96db6" "new"="&{UUID:ca39f1f4-b2d6-421f-9354-db71d4e96db6 Chassis:0xc0001f9440 Datapath:03caccf0-c9e5-40b2-b775-78ddd6e1ae32 Encap: ExternalIDs:map[] GatewayChassis:[] HaChassisGroup: LogicalPort:k8s-release-ci-ci-op-k5cwk1pv-7cb14 MAC:[52:17:ad:e6:11:7d 10.42.0.2] NatAddresses:[] Options:map[] ParentPort: RequestedChassis: Tag: TunnelKey:2 Type: Up:0xc000914ff8 VirtualParent:}" "old"="&{UUID:ca39f1f4-b2d6-421f-9354-db71d4e96db6 Chassis:0xc0001f9460 Datapath:03caccf0-c9e5-40b2-b775-78ddd6e1ae32 Encap: ExternalIDs:map[] GatewayChassis:[] HaChassisGroup: LogicalPort:k8s-release-ci-ci-op-k5cwk1pv-7cb14 MAC:[52:17:ad:e6:11:7d 10.42.0.2] NatAddresses:[] Options:map[] ParentPort: RequestedChassis: Tag: TunnelKey:2 Type: Up:0xc000915008 VirtualParent:}" I1115 05:39:27.450152 63045 gateway_iptables.go:67] Adding rule in table: nat, chain: PREROUTING with args: "-j OVN-KUBE-EXTERNALIP" for protocol: 0 I1115 05:39:27.454502 63045 gateway_iptables.go:70] Chain: "PREROUTING" in table: "nat" already exists, skipping creation: running [/usr/sbin/iptables -t nat -N PREROUTING --wait]: exit status 1: iptables: Chain already exists. I1115 05:39:27.464763 63045 gateway_iptables.go:67] Adding rule in table: nat, chain: OUTPUT with args: "-j OVN-KUBE-EXTERNALIP" for protocol: 0 I1115 05:39:27.469018 63045 gateway_iptables.go:70] Chain: "OUTPUT" in table: "nat" already exists, skipping creation: running [/usr/sbin/iptables -t nat -N OUTPUT --wait]: exit status 1: iptables: Chain already exists. I1115 05:39:27.477439 63045 gateway_iptables.go:67] Adding rule in table: nat, chain: PREROUTING with args: "-j OVN-KUBE-ETP" for protocol: 0 I1115 05:39:27.482256 63045 gateway_iptables.go:70] Chain: "PREROUTING" in table: "nat" already exists, skipping creation: running [/usr/sbin/iptables -t nat -N PREROUTING --wait]: exit status 1: iptables: Chain already exists. 
I1115 05:39:27.495580 63045 gateway_iptables.go:346] Chain: "OVN-KUBE-ITP" in table: "nat" already exists, skipping creation: running [/usr/sbin/iptables -t nat -N OVN-KUBE-ITP --wait]: exit status 1: iptables: Chain already exists. I1115 05:39:27.499762 63045 gateway_iptables.go:346] Chain: "OVN-KUBE-ITP" in table: "mangle" already exists, skipping creation: running [/usr/sbin/iptables -t mangle -N OVN-KUBE-ITP --wait]: exit status 1: iptables: Chain already exists. I1115 05:39:27.503890 63045 gateway_iptables.go:346] Chain: "OVN-KUBE-EGRESS-SVC" in table: "nat" already exists, skipping creation: running [/usr/sbin/iptables -t nat -N OVN-KUBE-EGRESS-SVC --wait]: exit status 1: iptables: Chain already exists. I1115 05:39:27.507969 63045 gateway_iptables.go:346] Chain: "OVN-KUBE-NODEPORT" in table: "nat" already exists, skipping creation: running [/usr/sbin/iptables -t nat -N OVN-KUBE-NODEPORT --wait]: exit status 1: iptables: Chain already exists. I1115 05:39:27.512056 63045 gateway_iptables.go:346] Chain: "OVN-KUBE-EXTERNALIP" in table: "nat" already exists, skipping creation: running [/usr/sbin/iptables -t nat -N OVN-KUBE-EXTERNALIP --wait]: exit status 1: iptables: Chain already exists. I1115 05:39:27.516166 63045 gateway_iptables.go:346] Chain: "OVN-KUBE-ETP" in table: "nat" already exists, skipping creation: running [/usr/sbin/iptables -t nat -N OVN-KUBE-ETP --wait]: exit status 1: iptables: Chain already exists. 
I1115 05:39:27.516196 63045 gateway_iptables.go:87] Deleting rule in table: filter, chain: FORWARD with args: "-j OVN-KUBE-ITP" for protocol: 0 I1115 05:39:27.520292 63045 gateway_iptables.go:87] Deleting rule in table: filter, chain: FORWARD with args: "-j OVN-KUBE-EGRESS-SVC" for protocol: 0 I1115 05:39:27.524544 63045 gateway_iptables.go:87] Deleting rule in table: filter, chain: FORWARD with args: "-j OVN-KUBE-NODEPORT" for protocol: 0 I1115 05:39:27.528713 63045 gateway_iptables.go:87] Deleting rule in table: filter, chain: FORWARD with args: "-j OVN-KUBE-EXTERNALIP" for protocol: 0 I1115 05:39:27.532882 63045 gateway_iptables.go:87] Deleting rule in table: filter, chain: FORWARD with args: "-j OVN-KUBE-ETP" for protocol: 0 I1115 05:39:27.537047 63045 gateway_shared_intf.go:1729] Ensuring IP Neighbor entry for: 169.254.169.1 W1115 05:39:27.537131 63045 gateway_shared_intf.go:1735] Failed to remove IP neighbor entry for ip 169.254.169.1, on iface br-ex: failed to delete neighbour entry 169.254.169.1 : no such file or directory I1115 05:39:27.537166 63045 gateway_shared_intf.go:1729] Ensuring IP Neighbor entry for: 169.254.169.4 W1115 05:39:27.537221 63045 gateway_shared_intf.go:1735] Failed to remove IP neighbor entry for ip 169.254.169.4, on iface br-ex: failed to delete neighbour entry 169.254.169.4 : no such file or directory I1115 05:39:27.537291 63045 healthcheck.go:362] Gateway OpenFlow sync requested I1115 05:39:27.595707 63045 gateway_shared_intf.go:503] Adding service router-internal-default in namespace openshift-ingress I1115 05:39:27.595745 63045 gateway_shared_intf.go:520] Rules already programmed for router-internal-default in namespace openshift-ingress I1115 05:39:27.595751 63045 gateway_shared_intf.go:503] Adding service dns-default in namespace openshift-dns I1115 05:39:27.595759 63045 gateway_shared_intf.go:520] Rules already programmed for dns-default in namespace openshift-dns I1115 05:39:27.595762 63045 gateway_shared_intf.go:503] Adding 
service kubernetes in namespace default I1115 05:39:27.595777 63045 gateway_shared_intf.go:520] Rules already programmed for kubernetes in namespace default I1115 05:39:27.595785 63045 factory.go:546] Added *v1.Service event handler 5 I1115 05:39:27.595804 63045 gateway_shared_intf.go:704] Adding endpointslice router-internal-default-vjpkx in namespace openshift-ingress I1115 05:39:27.595811 63045 gateway_shared_intf.go:704] Adding endpointslice dns-default-jxtwl in namespace openshift-dns I1115 05:39:27.595816 63045 gateway_shared_intf.go:704] Adding endpointslice kubernetes in namespace default I1115 05:39:27.595822 63045 factory.go:546] Added *v1.EndpointSlice event handler 6 I1115 05:39:27.773144 63045 ovs.go:200] Exec(35): /usr/bin/ovs-vsctl --timeout=15 --if-exists get interface ovn-k8s-mp0 ofport I1115 05:39:27.783312 63045 ovs.go:203] Exec(35): stdout: "1\n" I1115 05:39:27.783331 63045 ovs.go:204] Exec(35): stderr: "" I1115 05:39:27.783345 63045 ovs.go:200] Exec(36): /usr/bin/ovs-ofctl --no-stats --no-names dump-flows br-int table=65,out_port=1 I1115 05:39:27.792363 63045 ovs.go:203] Exec(36): stdout: " cookie=0xca39f1f4, table=65, priority=100,reg15=0x2,metadata=0x1 actions=output:1\n" I1115 05:39:27.792382 63045 ovs.go:204] Exec(36): stderr: "" I1115 05:39:27.792391 63045 management-port.go:130] Management port is ready I1115 05:39:27.792406 63045 node_ip_handler_linux.go:163] Node IP manager is running I1115 05:39:27.792411 63045 gateway.go:193] Spawning Conntrack Rule Check Thread I1115 05:39:27.792422 63045 node.go:438] Gateway and management port readiness took 690.245332ms I1115 05:39:27.792786 63045 node_ip_handler_linux.go:237] Skipping non-useable IP address for host: 127.0.0.1 I1115 05:39:27.792797 63045 node_ip_handler_linux.go:237] Skipping non-useable IP address for host: 169.254.169.2 I1115 05:39:27.792803 63045 node_ip_handler_linux.go:237] Skipping non-useable IP address for host: 10.42.0.2 I1115 05:39:27.792810 63045 
node_ip_handler_linux.go:237] Skipping non-useable IP address for host: ::1 I1115 05:39:27.792817 63045 node_ip_handler_linux.go:237] Skipping non-useable IP address for host: fe80::c8b8:1f81:4b0c:7b33 I1115 05:39:27.792823 63045 node_ip_handler_linux.go:237] Skipping non-useable IP address for host: fe80::5017:adff:fee6:117d I1115 05:39:27.793130 63045 ovs.go:200] Exec(37): /usr/bin/ovs-ofctl -O OpenFlow13 --bundle replace-flows br-ex - I1115 05:39:27.801525 63045 node_upgrade.go:74] Detected cluster topology version 5 from ConfigMap openshift-ovn-kubernetes/control-plane-status I1115 05:39:27.801545 63045 node.go:450] Current control-plane topology version is 5 I1115 05:39:27.801767 63045 ovs.go:200] Exec(38): /usr/bin/ovs-vsctl --timeout=15 --if-exists del-br br-ext I1115 05:39:27.822115 63045 ovs.go:203] Exec(38): stdout: "" I1115 05:39:27.822134 63045 ovs.go:204] Exec(38): stderr: "" I1115 05:39:27.822146 63045 ovs.go:200] Exec(39): /usr/bin/ovs-vsctl --timeout=15 --if-exists del-port br-int int I1115 05:39:27.836084 63045 ovs.go:203] Exec(39): stdout: "" I1115 05:39:27.836102 63045 ovs.go:204] Exec(39): stderr: "" I1115 05:39:27.850719 63045 node.go:598] Egress IP health check server skipped: no port specified I1115 05:39:27.850738 63045 node.go:591] OVN Kube Node initialized and ready. 
W1115 05:39:27.864644 63045 management-port_linux.go:310] missing management port nat rule in chain OVN-KUBE-SNAT-MGMTPORT, adding it I1115 05:39:33.257478 63045 egress_services_node.go:169] Processing sync for Egress Service node release-ci-ci-op-k5cwk1pv-7cb14 I1115 05:39:33.257767 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:update Table:Logical_Router_Policy Row:map[action:allow external_ids:{GoMap:map[node:release-ci-ci-op-k5cwk1pv-7cb14]} match:ip4.src == 10.42.0.0/16 && ip4.dst == 10.0.0.2/32 priority:102] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {49b06b1c-7370-4cfc-8440-1876c45a4898}] Until: Durable: Comment: Lock: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:49b06b1c-7370-4cfc-8440-1876c45a4898}]}}] Timeout: Where:[where column _uuid == {0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5}] Until: Durable: Comment: Lock: UUIDName:}]" I1115 05:39:33.258420 63045 master.go:1364] Adding or Updating Node "release-ci-ci-op-k5cwk1pv-7cb14" I1115 05:39:33.258481 63045 egress_services_node.go:172] Finished syncing Egress Service node release-ci-ci-op-k5cwk1pv-7cb14: 1.012248ms I1115 05:39:33.263658 63045 master.go:1364] Adding or Updating Node "release-ci-ci-op-k5cwk1pv-7cb14" I1115 05:39:33.267711 63045 pods.go:439] [openshift-service-ca/service-ca-77fc4cc659-dp8dn] creating logical port for pod on switch release-ci-ci-op-k5cwk1pv-7cb14 I1115 05:39:33.267937 63045 kube.go:71] Setting annotations map[k8s.ovn.org/pod-networks:{"default":{"ip_addresses":["10.42.0.3/24"],"mac_address":"0a:58:0a:2a:00:03","gateway_ips":["10.42.0.1"],"ip_address":"10.42.0.3/24","gateway_ip":"10.42.0.1"}}] on pod openshift-service-ca/service-ca-77fc4cc659-dp8dn I1115 05:39:33.269757 63045 pods.go:439] [openshift-ingress/router-default-76b7657c68-6xcfc] creating logical port for pod on switch release-ci-ci-op-k5cwk1pv-7cb14 I1115 
05:39:33.269919 63045 kube.go:71] Setting annotations map[k8s.ovn.org/pod-networks:{"default":{"ip_addresses":["10.42.0.4/24"],"mac_address":"0a:58:0a:2a:00:04","gateway_ips":["10.42.0.1"],"ip_address":"10.42.0.4/24","gateway_ip":"10.42.0.1"}}] on pod openshift-ingress/router-default-76b7657c68-6xcfc I1115 05:39:33.281788 63045 obj_retry.go:1415] Creating *v1.Pod openshift-storage/topolvm-node-2tnh5 took: 1.501µs I1115 05:39:33.289800 63045 obj_retry.go:1415] Creating *v1.Pod openshift-dns/dns-default-tw2xt took: 1.35µs I1115 05:39:33.290051 63045 pods.go:439] [openshift-storage/topolvm-node-2tnh5] creating logical port for pod on switch release-ci-ci-op-k5cwk1pv-7cb14 I1115 05:39:33.290220 63045 kube.go:71] Setting annotations map[k8s.ovn.org/pod-networks:{"default":{"ip_addresses":["10.42.0.5/24"],"mac_address":"0a:58:0a:2a:00:05","gateway_ips":["10.42.0.1"],"ip_address":"10.42.0.5/24","gateway_ip":"10.42.0.1"}}] on pod openshift-storage/topolvm-node-2tnh5 I1115 05:39:33.300213 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:insert Value:{GoSet:[10.42.0.4]}}] Timeout: Where:[where column _uuid == {c8d0d3b5-0ba2-4746-9b87-69e77f04047c}] Until: Durable: Comment: Lock: UUIDName:} {Op:insert Table:NAT Row:map[external_ip:10.0.0.2 logical_ip:10.42.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996239} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:u2596996239}]}}] Timeout: Where:[where column _uuid == {5c5170fd-1297-4d21-9aac-501b448f04c1}] Until: Durable: Comment: Lock: UUIDName:} {Op:insert Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:2a:00:04 10.42.0.4]} external_ids:{GoMap:map[namespace:openshift-ingress pod:true]} 
name:openshift-ingress_router-default-76b7657c68-6xcfc options:{GoMap:map[iface-id-ver:868ba04c-b1ea-438a-a89c-8e90befa7a1d requested-chassis:release-ci-ci-op-k5cwk1pv-7cb14]} port_security:{GoSet:[0a:58:0a:2a:00:04 10.42.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996240} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:u2596996240}]}}] Timeout: Where:[where column _uuid == {55ba86b1-407f-4f90-86ba-a2378c8d6ccc}] Until: Durable: Comment: Lock: UUIDName:}]" I1115 05:39:33.300629 63045 pods.go:439] [openshift-dns/dns-default-tw2xt] creating logical port for pod on switch release-ci-ci-op-k5cwk1pv-7cb14 I1115 05:39:33.300727 63045 kube.go:71] Setting annotations map[k8s.ovn.org/pod-networks:{"default":{"ip_addresses":["10.42.0.6/24"],"mac_address":"0a:58:0a:2a:00:06","gateway_ips":["10.42.0.1"],"ip_address":"10.42.0.6/24","gateway_ip":"10.42.0.1"}}] on pod openshift-dns/dns-default-tw2xt I1115 05:39:33.304969 63045 pods.go:428] [openshift-ingress/router-default-76b7657c68-6xcfc] addLogicalPort took 35.214811ms, libovsdb time 4.827753ms, annotation time: 30.037678ms I1115 05:39:33.305013 63045 pods.go:439] [openshift-storage/topolvm-controller-8456864f89-vg42d] creating logical port for pod on switch release-ci-ci-op-k5cwk1pv-7cb14 I1115 05:39:33.305079 63045 kube.go:71] Setting annotations map[k8s.ovn.org/pod-networks:{"default":{"ip_addresses":["10.42.0.7/24"],"mac_address":"0a:58:0a:2a:00:07","gateway_ips":["10.42.0.1"],"ip_address":"10.42.0.7/24","gateway_ip":"10.42.0.1"}}] on pod openshift-storage/topolvm-controller-8456864f89-vg42d I1115 05:39:33.311946 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:insert Value:{GoSet:[10.42.0.5]}}] Timeout: Where:[where column _uuid == 
{28858321-dd27-4de7-853e-70d96eeed103}] Until: Durable: Comment: Lock: UUIDName:} {Op:insert Table:NAT Row:map[external_ip:10.0.0.2 logical_ip:10.42.0.5 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996241} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:u2596996241}]}}] Timeout: Where:[where column _uuid == {5c5170fd-1297-4d21-9aac-501b448f04c1}] Until: Durable: Comment: Lock: UUIDName:} {Op:insert Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:2a:00:05 10.42.0.5]} external_ids:{GoMap:map[namespace:openshift-storage pod:true]} name:openshift-storage_topolvm-node-2tnh5 options:{GoMap:map[iface-id-ver:30861194-030d-40d4-86be-44594d858fac requested-chassis:release-ci-ci-op-k5cwk1pv-7cb14]} port_security:{GoSet:[0a:58:0a:2a:00:05 10.42.0.5]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996242} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:u2596996242}]}}] Timeout: Where:[where column _uuid == {55ba86b1-407f-4f90-86ba-a2378c8d6ccc}] Until: Durable: Comment: Lock: UUIDName:}]" I1115 05:39:33.312529 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:insert Value:{GoSet:[10.42.0.6]}}] Timeout: Where:[where column _uuid == {31da521f-e3bb-4921-b342-bda903443133}] Until: Durable: Comment: Lock: UUIDName:} {Op:insert Table:NAT Row:map[external_ip:10.0.0.2 logical_ip:10.42.0.6 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996243} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert 
Value:{GoSet:[{GoUUID:u2596996243}]}}] Timeout: Where:[where column _uuid == {5c5170fd-1297-4d21-9aac-501b448f04c1}] Until: Durable: Comment: Lock: UUIDName:} {Op:insert Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:2a:00:06 10.42.0.6]} external_ids:{GoMap:map[namespace:openshift-dns pod:true]} name:openshift-dns_dns-default-tw2xt options:{GoMap:map[iface-id-ver:b4bede0d-71c2-40dd-9ebe-3395e3ddf85e requested-chassis:release-ci-ci-op-k5cwk1pv-7cb14]} port_security:{GoSet:[0a:58:0a:2a:00:06 10.42.0.6]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996244} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:u2596996244}]}}] Timeout: Where:[where column _uuid == {55ba86b1-407f-4f90-86ba-a2378c8d6ccc}] Until: Durable: Comment: Lock: UUIDName:}]" I1115 05:39:33.317857 63045 pods.go:428] [openshift-dns/dns-default-tw2xt] addLogicalPort took 17.225011ms, libovsdb time 5.438198ms, annotation time: 11.479285ms I1115 05:39:33.318396 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:insert Value:{GoSet:[10.42.0.7]}}] Timeout: Where:[where column _uuid == {28858321-dd27-4de7-853e-70d96eeed103}] Until: Durable: Comment: Lock: UUIDName:} {Op:insert Table:NAT Row:map[external_ip:10.0.0.2 logical_ip:10.42.0.7 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996245} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:u2596996245}]}}] Timeout: Where:[where column _uuid == {5c5170fd-1297-4d21-9aac-501b448f04c1}] Until: Durable: Comment: Lock: UUIDName:} {Op:insert Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:2a:00:07 10.42.0.7]} 
external_ids:{GoMap:map[namespace:openshift-storage pod:true]} name:openshift-storage_topolvm-controller-8456864f89-vg42d options:{GoMap:map[iface-id-ver:9756b5e3-88df-4742-a05c-c5bbceab89ca requested-chassis:release-ci-ci-op-k5cwk1pv-7cb14]} port_security:{GoSet:[0a:58:0a:2a:00:07 10.42.0.7]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996246} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:u2596996246}]}}] Timeout: Where:[where column _uuid == {55ba86b1-407f-4f90-86ba-a2378c8d6ccc}] Until: Durable: Comment: Lock: UUIDName:}]" I1115 05:39:33.318597 63045 pods.go:428] [openshift-storage/topolvm-node-2tnh5] addLogicalPort took 28.549258ms, libovsdb time 6.751097ms, annotation time: 21.380024ms I1115 05:39:33.319113 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:insert Value:{GoSet:[10.42.0.3]}}] Timeout: Where:[where column _uuid == {5a6731e4-5f9c-469d-a8a1-e8af11024a3d}] Until: Durable: Comment: Lock: UUIDName:} {Op:insert Table:NAT Row:map[external_ip:10.0.0.2 logical_ip:10.42.0.3 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996247} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:u2596996247}]}}] Timeout: Where:[where column _uuid == {5c5170fd-1297-4d21-9aac-501b448f04c1}] Until: Durable: Comment: Lock: UUIDName:} {Op:insert Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:2a:00:03 10.42.0.3]} external_ids:{GoMap:map[namespace:openshift-service-ca pod:true]} name:openshift-service-ca_service-ca-77fc4cc659-dp8dn options:{GoMap:map[iface-id-ver:cf8cee7d-02cc-44f5-9e8d-0ff4621360aa 
requested-chassis:release-ci-ci-op-k5cwk1pv-7cb14]} port_security:{GoSet:[0a:58:0a:2a:00:03 10.42.0.3]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[] Until: Durable: Comment: Lock: UUIDName:u2596996248} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:u2596996248}]}}] Timeout: Where:[where column _uuid == {55ba86b1-407f-4f90-86ba-a2378c8d6ccc}] Until: Durable: Comment: Lock: UUIDName:}]" I1115 05:39:33.324509 63045 pods.go:428] [openshift-service-ca/service-ca-77fc4cc659-dp8dn] addLogicalPort took 56.783819ms, libovsdb time 5.449031ms, annotation time: 50.899386ms I1115 05:39:33.324597 63045 pods.go:428] [openshift-storage/topolvm-controller-8456864f89-vg42d] addLogicalPort took 19.58553ms, libovsdb time 6.300345ms, annotation time: 13.00039ms I1115 05:39:33.764510 63045 cni.go:227] [openshift-service-ca/service-ca-77fc4cc659-dp8dn 214ced3f12f00ad82bb53efe4a65b0da0d78545eaf48502735b338c7711fa41b] ADD starting CNI request [openshift-service-ca/service-ca-77fc4cc659-dp8dn 214ced3f12f00ad82bb53efe4a65b0da0d78545eaf48502735b338c7711fa41b] I1115 05:39:33.781150 63045 helper_linux.go:334] ConfigureOVS: namespace: openshift-service-ca, podName: service-ca-77fc4cc659-dp8dn, SandboxID: "214ced3f12f00ad82bb53efe4a65b0da0d78545eaf48502735b338c7711fa41b", UID: "cf8cee7d-02cc-44f5-9e8d-0ff4621360aa", MAC: 0a:58:0a:2a:00:03, IPs: [10.42.0.3/24] I1115 05:39:33.818692 63045 cni.go:227] [openshift-storage/topolvm-controller-8456864f89-vg42d 15f5fa52af1dc67ee17719329f6cbf19dafd2b43fcdd97b9b0e0d7ade102d9ce] ADD starting CNI request [openshift-storage/topolvm-controller-8456864f89-vg42d 15f5fa52af1dc67ee17719329f6cbf19dafd2b43fcdd97b9b0e0d7ade102d9ce] I1115 05:39:33.821047 63045 cni.go:227] [openshift-storage/topolvm-node-2tnh5 870b223f98ce64dfd8003f12ac079cf5cfd40077eed0f4d62e696092b8e624a3] ADD starting CNI request [openshift-storage/topolvm-node-2tnh5 
870b223f98ce64dfd8003f12ac079cf5cfd40077eed0f4d62e696092b8e624a3] I1115 05:39:33.837346 63045 helper_linux.go:334] ConfigureOVS: namespace: openshift-storage, podName: topolvm-node-2tnh5, SandboxID: "870b223f98ce64dfd8003f12ac079cf5cfd40077eed0f4d62e696092b8e624a3", UID: "30861194-030d-40d4-86be-44594d858fac", MAC: 0a:58:0a:2a:00:05, IPs: [10.42.0.5/24] I1115 05:39:33.868485 63045 helper_linux.go:334] ConfigureOVS: namespace: openshift-storage, podName: topolvm-controller-8456864f89-vg42d, SandboxID: "15f5fa52af1dc67ee17719329f6cbf19dafd2b43fcdd97b9b0e0d7ade102d9ce", UID: "9756b5e3-88df-4742-a05c-c5bbceab89ca", MAC: 0a:58:0a:2a:00:07, IPs: [10.42.0.7/24] I1115 05:39:34.184725 63045 cni.go:248] [openshift-service-ca/service-ca-77fc4cc659-dp8dn 214ced3f12f00ad82bb53efe4a65b0da0d78545eaf48502735b338c7711fa41b] ADD finished CNI request [openshift-service-ca/service-ca-77fc4cc659-dp8dn 214ced3f12f00ad82bb53efe4a65b0da0d78545eaf48502735b338c7711fa41b], result "{\"interfaces\":[{\"name\":\"214ced3f12f00ad\",\"mac\":\"c2:df:b0:f7:be:7c\"},{\"name\":\"eth0\",\"mac\":\"0a:58:0a:2a:00:03\",\"sandbox\":\"/var/run/netns/ae26c312-94a4-41d7-a012-fa56dbd4ab44\"}],\"ips\":[{\"interface\":1,\"address\":\"10.42.0.3/24\",\"gateway\":\"10.42.0.1\"}],\"dns\":{}}", err I1115 05:39:34.299124 63045 cni.go:248] [openshift-storage/topolvm-node-2tnh5 870b223f98ce64dfd8003f12ac079cf5cfd40077eed0f4d62e696092b8e624a3] ADD finished CNI request [openshift-storage/topolvm-node-2tnh5 870b223f98ce64dfd8003f12ac079cf5cfd40077eed0f4d62e696092b8e624a3], result "{\"interfaces\":[{\"name\":\"870b223f98ce64d\",\"mac\":\"62:34:69:a9:16:ab\"},{\"name\":\"eth0\",\"mac\":\"0a:58:0a:2a:00:05\",\"sandbox\":\"/var/run/netns/c13ab9c0-670d-41ed-a00a-a51976c4ac09\"}],\"ips\":[{\"interface\":1,\"address\":\"10.42.0.5/24\",\"gateway\":\"10.42.0.1\"}],\"dns\":{}}", err I1115 05:39:34.301369 63045 cni.go:248] [openshift-storage/topolvm-controller-8456864f89-vg42d 
15f5fa52af1dc67ee17719329f6cbf19dafd2b43fcdd97b9b0e0d7ade102d9ce] ADD finished CNI request [openshift-storage/topolvm-controller-8456864f89-vg42d 15f5fa52af1dc67ee17719329f6cbf19dafd2b43fcdd97b9b0e0d7ade102d9ce], result "{\"interfaces\":[{\"name\":\"15f5fa52af1dc67\",\"mac\":\"de:c9:0a:a5:69:5b\"},{\"name\":\"eth0\",\"mac\":\"0a:58:0a:2a:00:07\",\"sandbox\":\"/var/run/netns/5d8efb6e-433c-4782-b5b3-f0a06b6a3e16\"}],\"ips\":[{\"interface\":1,\"address\":\"10.42.0.7/24\",\"gateway\":\"10.42.0.1\"}],\"dns\":{}}", err I1115 05:39:35.404458 63045 master.go:1364] Adding or Updating Node "release-ci-ci-op-k5cwk1pv-7cb14" I1115 05:39:38.633103 63045 services_controller.go:241] Processing sync for service openshift-dns/dns-default I1115 05:39:38.633150 63045 kube.go:303] Getting endpoints for slice openshift-dns/dns-default-jxtwl I1115 05:39:38.633157 63045 kube.go:346] LB Endpoints for openshift-dns/dns-default are: [] / [] on port: 0 I1115 05:39:38.633165 63045 kube.go:303] Getting endpoints for slice openshift-dns/dns-default-jxtwl I1115 05:39:38.633170 63045 kube.go:346] LB Endpoints for openshift-dns/dns-default are: [] / [] on port: 0 I1115 05:39:38.633173 63045 kube.go:303] Getting endpoints for slice openshift-dns/dns-default-jxtwl I1115 05:39:38.633177 63045 kube.go:346] LB Endpoints for openshift-dns/dns-default are: [] / [] on port: 0 I1115 05:39:38.633205 63045 services_controller.go:305] Service openshift-dns/dns-default has 0 cluster-wide and 3 per-node configs, making 0 and 2 load balancers I1115 05:39:38.633227 63045 services_controller.go:314] Skipping no-op change for service openshift-dns/dns-default I1115 05:39:38.633231 63045 services_controller.go:245] Finished syncing service dns-default on namespace openshift-dns : 133.844µs I1115 05:39:41.368149 63045 cni.go:227] [openshift-ingress/router-default-76b7657c68-6xcfc 1c94b1eee8cf5a6b1d6c458775495e7841aac6a7e3e36d2009b0228c99ea228f] ADD starting CNI request 
[openshift-ingress/router-default-76b7657c68-6xcfc 1c94b1eee8cf5a6b1d6c458775495e7841aac6a7e3e36d2009b0228c99ea228f] I1115 05:39:41.379658 63045 helper_linux.go:334] ConfigureOVS: namespace: openshift-ingress, podName: router-default-76b7657c68-6xcfc, SandboxID: "1c94b1eee8cf5a6b1d6c458775495e7841aac6a7e3e36d2009b0228c99ea228f", UID: "868ba04c-b1ea-438a-a89c-8e90befa7a1d", MAC: 0a:58:0a:2a:00:04, IPs: [10.42.0.4/24] I1115 05:39:41.462241 63045 cni.go:227] [openshift-dns/dns-default-tw2xt ca98647713499726095982a8c56bc82f60a9aec1ff629ef26556c09df05020d4] ADD starting CNI request [openshift-dns/dns-default-tw2xt ca98647713499726095982a8c56bc82f60a9aec1ff629ef26556c09df05020d4] I1115 05:39:41.471782 63045 helper_linux.go:334] ConfigureOVS: namespace: openshift-dns, podName: dns-default-tw2xt, SandboxID: "ca98647713499726095982a8c56bc82f60a9aec1ff629ef26556c09df05020d4", UID: "b4bede0d-71c2-40dd-9ebe-3395e3ddf85e", MAC: 0a:58:0a:2a:00:06, IPs: [10.42.0.6/24] I1115 05:39:41.655476 63045 cni.go:248] [openshift-dns/dns-default-tw2xt ca98647713499726095982a8c56bc82f60a9aec1ff629ef26556c09df05020d4] ADD finished CNI request [openshift-dns/dns-default-tw2xt ca98647713499726095982a8c56bc82f60a9aec1ff629ef26556c09df05020d4], result "{\"interfaces\":[{\"name\":\"ca9864771349972\",\"mac\":\"9a:7f:a5:74:0c:06\"},{\"name\":\"eth0\",\"mac\":\"0a:58:0a:2a:00:06\",\"sandbox\":\"/var/run/netns/5c5f3dee-370f-42dd-8374-b7b9da991ff2\"}],\"ips\":[{\"interface\":1,\"address\":\"10.42.0.6/24\",\"gateway\":\"10.42.0.1\"}],\"dns\":{}}", err I1115 05:39:41.775841 63045 cni.go:248] [openshift-ingress/router-default-76b7657c68-6xcfc 1c94b1eee8cf5a6b1d6c458775495e7841aac6a7e3e36d2009b0228c99ea228f] ADD finished CNI request [openshift-ingress/router-default-76b7657c68-6xcfc 1c94b1eee8cf5a6b1d6c458775495e7841aac6a7e3e36d2009b0228c99ea228f], result 
"{\"interfaces\":[{\"name\":\"1c94b1eee8cf5a6\",\"mac\":\"96:cd:4b:13:49:95\"},{\"name\":\"eth0\",\"mac\":\"0a:58:0a:2a:00:04\",\"sandbox\":\"/var/run/netns/65410a39-d5e1-407f-a877-3edec02ce8b3\"}],\"ips\":[{\"interface\":1,\"address\":\"10.42.0.4/24\",\"gateway\":\"10.42.0.1\"}],\"dns\":{}}", err I1115 05:39:44.273470 63045 master.go:1364] Adding or Updating Node "release-ci-ci-op-k5cwk1pv-7cb14" I1115 05:39:46.946657 63045 services_controller.go:241] Processing sync for service openshift-ingress/router-internal-default I1115 05:39:46.946701 63045 kube.go:303] Getting endpoints for slice openshift-ingress/router-internal-default-vjpkx I1115 05:39:46.946709 63045 kube.go:326] Slice endpoints Not Ready I1115 05:39:46.946715 63045 kube.go:346] LB Endpoints for openshift-ingress/router-internal-default are: [] / [] on port: 80 I1115 05:39:46.946722 63045 kube.go:303] Getting endpoints for slice openshift-ingress/router-internal-default-vjpkx I1115 05:39:46.946728 63045 kube.go:326] Slice endpoints Not Ready I1115 05:39:46.946732 63045 kube.go:346] LB Endpoints for openshift-ingress/router-internal-default are: [] / [] on port: 443 I1115 05:39:46.946737 63045 kube.go:303] Getting endpoints for slice openshift-ingress/router-internal-default-vjpkx I1115 05:39:46.946742 63045 kube.go:326] Slice endpoints Not Ready I1115 05:39:46.946747 63045 kube.go:346] LB Endpoints for openshift-ingress/router-internal-default are: [] / [] on port: 1936 I1115 05:39:46.946770 63045 services_controller.go:305] Service openshift-ingress/router-internal-default has 3 cluster-wide and 0 per-node configs, making 1 and 0 load balancers I1115 05:39:46.946792 63045 services_controller.go:314] Skipping no-op change for service openshift-ingress/router-internal-default I1115 05:39:46.946802 63045 services_controller.go:245] Finished syncing service router-internal-default on namespace openshift-ingress : 150.988µs I1115 05:39:47.955314 63045 services_controller.go:241] Processing sync for service 
openshift-ingress/router-internal-default I1115 05:39:47.955351 63045 kube.go:303] Getting endpoints for slice openshift-ingress/router-internal-default-vjpkx I1115 05:39:47.955363 63045 kube.go:330] Adding slice router-internal-default-vjpkx endpoints: [10.42.0.4], port: 80 I1115 05:39:47.955377 63045 kube.go:346] LB Endpoints for openshift-ingress/router-internal-default are: [10.42.0.4] / [] on port: 80 I1115 05:39:47.955389 63045 kube.go:303] Getting endpoints for slice openshift-ingress/router-internal-default-vjpkx I1115 05:39:47.955400 63045 kube.go:330] Adding slice router-internal-default-vjpkx endpoints: [10.42.0.4], port: 443 I1115 05:39:47.955409 63045 kube.go:346] LB Endpoints for openshift-ingress/router-internal-default are: [10.42.0.4] / [] on port: 443 I1115 05:39:47.955417 63045 kube.go:303] Getting endpoints for slice openshift-ingress/router-internal-default-vjpkx I1115 05:39:47.955427 63045 kube.go:330] Adding slice router-internal-default-vjpkx endpoints: [10.42.0.4], port: 1936 I1115 05:39:47.955437 63045 kube.go:346] LB Endpoints for openshift-ingress/router-internal-default are: [10.42.0.4] / [] on port: 1936 I1115 05:39:47.955470 63045 services_controller.go:305] Service openshift-ingress/router-internal-default has 3 cluster-wide and 0 per-node configs, making 1 and 0 load balancers I1115 05:39:47.955845 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress/router-internal-default]} name:Service_openshift-ingress/router-internal-default_TCP_cluster options:{GoMap:map[event:false reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.43.73.144:1936:10.42.0.4:1936 10.43.73.144:443:10.42.0.4:443 10.43.73.144:80:10.42.0.4:80]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {b32a33dc-269e-4adc-a189-4e21c2044d70}] 
Until: Durable: Comment: Lock: UUIDName:}]" I1115 05:39:47.960688 63045 services_controller.go:245] Finished syncing service router-internal-default on namespace openshift-ingress : 5.378246ms I1115 05:39:48.664804 63045 egress_services_node.go:169] Processing sync for Egress Service node release-ci-ci-op-k5cwk1pv-7cb14 I1115 05:39:48.665077 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:update Table:Logical_Router_Policy Row:map[action:allow external_ids:{GoMap:map[node:release-ci-ci-op-k5cwk1pv-7cb14]} match:ip4.src == 10.42.0.0/16 && ip4.dst == 10.0.0.2/32 priority:102] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {49b06b1c-7370-4cfc-8440-1876c45a4898}] Until: Durable: Comment: Lock: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:49b06b1c-7370-4cfc-8440-1876c45a4898}]}}] Timeout: Where:[where column _uuid == {0b12a74c-0c2a-4e8c-aa81-8e9b5f98d5f5}] Until: Durable: Comment: Lock: UUIDName:}]" I1115 05:39:48.665456 63045 master.go:1364] Adding or Updating Node "release-ci-ci-op-k5cwk1pv-7cb14" I1115 05:39:48.665779 63045 egress_services_node.go:172] Finished syncing Egress Service node release-ci-ci-op-k5cwk1pv-7cb14: 981.516µs I1115 05:39:49.948625 63045 services_controller.go:241] Processing sync for service openshift-dns/dns-default I1115 05:39:49.948658 63045 kube.go:303] Getting endpoints for slice openshift-dns/dns-default-jxtwl I1115 05:39:49.948666 63045 kube.go:326] Slice endpoints Not Ready I1115 05:39:49.948672 63045 kube.go:346] LB Endpoints for openshift-dns/dns-default are: [] / [] on port: 5353 I1115 05:39:49.948679 63045 kube.go:303] Getting endpoints for slice openshift-dns/dns-default-jxtwl I1115 05:39:49.948684 63045 kube.go:326] Slice endpoints Not Ready I1115 05:39:49.948689 63045 kube.go:346] LB Endpoints for openshift-dns/dns-default are: [] / [] on port: 5353 I1115 
05:39:49.948693 63045 kube.go:303] Getting endpoints for slice openshift-dns/dns-default-jxtwl I1115 05:39:49.948699 63045 kube.go:326] Slice endpoints Not Ready I1115 05:39:49.948703 63045 kube.go:346] LB Endpoints for openshift-dns/dns-default are: [] / [] on port: 9154 I1115 05:39:49.948733 63045 services_controller.go:305] Service openshift-dns/dns-default has 0 cluster-wide and 3 per-node configs, making 0 and 2 load balancers I1115 05:39:49.948760 63045 services_controller.go:314] Skipping no-op change for service openshift-dns/dns-default I1115 05:39:49.948766 63045 services_controller.go:245] Finished syncing service dns-default on namespace openshift-dns : 147.6µs I1115 05:39:56.441613 63045 services_controller.go:241] Processing sync for service openshift-dns/dns-default I1115 05:39:56.441667 63045 kube.go:303] Getting endpoints for slice openshift-dns/dns-default-jxtwl I1115 05:39:56.441677 63045 kube.go:330] Adding slice dns-default-jxtwl endpoints: [10.42.0.6], port: 5353 I1115 05:39:56.441689 63045 kube.go:346] LB Endpoints for openshift-dns/dns-default are: [10.42.0.6] / [] on port: 5353 I1115 05:39:56.441697 63045 kube.go:303] Getting endpoints for slice openshift-dns/dns-default-jxtwl I1115 05:39:56.441703 63045 kube.go:330] Adding slice dns-default-jxtwl endpoints: [10.42.0.6], port: 5353 I1115 05:39:56.441709 63045 kube.go:346] LB Endpoints for openshift-dns/dns-default are: [10.42.0.6] / [] on port: 5353 I1115 05:39:56.441713 63045 kube.go:303] Getting endpoints for slice openshift-dns/dns-default-jxtwl I1115 05:39:56.441718 63045 kube.go:330] Adding slice dns-default-jxtwl endpoints: [10.42.0.6], port: 9154 I1115 05:39:56.441723 63045 kube.go:346] LB Endpoints for openshift-dns/dns-default are: [10.42.0.6] / [] on port: 9154 I1115 05:39:56.441757 63045 services_controller.go:305] Service openshift-dns/dns-default has 0 cluster-wide and 3 per-node configs, making 0 and 2 load balancers I1115 05:39:56.442070 63045 client.go:783] 
"msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_TCP_node_router+switch_release-ci-ci-op-k5cwk1pv-7cb14 options:{GoMap:map[event:false reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.43.0.10:53:10.42.0.6:5353 10.43.0.10:9154:10.42.0.6:9154]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {fd289ee5-55f1-454a-a4ab-7b4cda27a8f2}] Until: Durable: Comment: Lock: UUIDName:} {Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns/dns-default]} name:Service_openshift-dns/dns-default_UDP_node_router+switch_release-ci-ci-op-k5cwk1pv-7cb14 options:{GoMap:map[event:false reject:true skip_snat:false]} protocol:{GoSet:[udp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.43.0.10:53:10.42.0.6:5353]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {d3771994-4315-4514-9d8d-dbdf2a0232c0}] Until: Durable: Comment: Lock: UUIDName:}]" I1115 05:39:56.445349 63045 services_controller.go:245] Finished syncing service dns-default on namespace openshift-dns : 3.728967ms I1115 05:39:56.615765 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:update Table:NB_Global Row:map[options:{GoMap:map[e2e_timestamp:1668490796 mac_prefix:da:45:83 max_tunid:16711680 northd_internal_version:22.06.1-20.23.0-63.4 northd_probe_interval:5000 svc_monitor_mac:ae:de:d6:8a:ee:d3 use_logical_dp_groups:true]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {975338d6-246c-4dca-9a4f-b865d4f805e9}] Until: Durable: Comment: Lock: UUIDName:}]" I1115 05:40:03.855248 63045 master.go:1364] Adding or Updating Node "release-ci-ci-op-k5cwk1pv-7cb14" I1115 05:40:26.616139 63045 client.go:783] 
"msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:update Table:NB_Global Row:map[options:{GoMap:map[e2e_timestamp:1668490826 mac_prefix:da:45:83 max_tunid:16711680 northd_internal_version:22.06.1-20.23.0-63.4 northd_probe_interval:5000 svc_monitor_mac:ae:de:d6:8a:ee:d3 use_logical_dp_groups:true]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {975338d6-246c-4dca-9a4f-b865d4f805e9}] Until: Durable: Comment: Lock: UUIDName:}]" I1115 05:40:56.616212 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:update Table:NB_Global Row:map[options:{GoMap:map[e2e_timestamp:1668490856 mac_prefix:da:45:83 max_tunid:16711680 northd_internal_version:22.06.1-20.23.0-63.4 northd_probe_interval:5000 svc_monitor_mac:ae:de:d6:8a:ee:d3 use_logical_dp_groups:true]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {975338d6-246c-4dca-9a4f-b865d4f805e9}] Until: Durable: Comment: Lock: UUIDName:}]" I1115 05:41:26.615404 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:update Table:NB_Global Row:map[options:{GoMap:map[e2e_timestamp:1668490886 mac_prefix:da:45:83 max_tunid:16711680 northd_internal_version:22.06.1-20.23.0-63.4 northd_probe_interval:5000 svc_monitor_mac:ae:de:d6:8a:ee:d3 use_logical_dp_groups:true]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {975338d6-246c-4dca-9a4f-b865d4f805e9}] Until: Durable: Comment: Lock: UUIDName:}]" I1115 05:41:56.616155 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:update Table:NB_Global Row:map[options:{GoMap:map[e2e_timestamp:1668490916 mac_prefix:da:45:83 max_tunid:16711680 northd_internal_version:22.06.1-20.23.0-63.4 northd_probe_interval:5000 svc_monitor_mac:ae:de:d6:8a:ee:d3 use_logical_dp_groups:true]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == 
{975338d6-246c-4dca-9a4f-b865d4f805e9}] Until: Durable: Comment: Lock: UUIDName:}]" I1115 05:42:26.615561 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:update Table:NB_Global Row:map[options:{GoMap:map[e2e_timestamp:1668490946 mac_prefix:da:45:83 max_tunid:16711680 northd_internal_version:22.06.1-20.23.0-63.4 northd_probe_interval:5000 svc_monitor_mac:ae:de:d6:8a:ee:d3 use_logical_dp_groups:true]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {975338d6-246c-4dca-9a4f-b865d4f805e9}] Until: Durable: Comment: Lock: UUIDName:}]" I1115 05:42:56.615912 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:update Table:NB_Global Row:map[options:{GoMap:map[e2e_timestamp:1668490976 mac_prefix:da:45:83 max_tunid:16711680 northd_internal_version:22.06.1-20.23.0-63.4 northd_probe_interval:5000 svc_monitor_mac:ae:de:d6:8a:ee:d3 use_logical_dp_groups:true]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {975338d6-246c-4dca-9a4f-b865d4f805e9}] Until: Durable: Comment: Lock: UUIDName:}]" I1115 05:43:26.615926 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:update Table:NB_Global Row:map[options:{GoMap:map[e2e_timestamp:1668491006 mac_prefix:da:45:83 max_tunid:16711680 northd_internal_version:22.06.1-20.23.0-63.4 northd_probe_interval:5000 svc_monitor_mac:ae:de:d6:8a:ee:d3 use_logical_dp_groups:true]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {975338d6-246c-4dca-9a4f-b865d4f805e9}] Until: Durable: Comment: Lock: UUIDName:}]" I1115 05:43:56.615990 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:update Table:NB_Global Row:map[options:{GoMap:map[e2e_timestamp:1668491036 mac_prefix:da:45:83 max_tunid:16711680 northd_internal_version:22.06.1-20.23.0-63.4 northd_probe_interval:5000 svc_monitor_mac:ae:de:d6:8a:ee:d3 
use_logical_dp_groups:true]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {975338d6-246c-4dca-9a4f-b865d4f805e9}] Until: Durable: Comment: Lock: UUIDName:}]" I1115 05:44:26.615916 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:update Table:NB_Global Row:map[options:{GoMap:map[e2e_timestamp:1668491066 mac_prefix:da:45:83 max_tunid:16711680 northd_internal_version:22.06.1-20.23.0-63.4 northd_probe_interval:5000 svc_monitor_mac:ae:de:d6:8a:ee:d3 use_logical_dp_groups:true]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {975338d6-246c-4dca-9a4f-b865d4f805e9}] Until: Durable: Comment: Lock: UUIDName:}]" I1115 05:44:45.609872 63045 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Namespace total 0 items received I1115 05:44:56.615916 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:update Table:NB_Global Row:map[options:{GoMap:map[e2e_timestamp:1668491096 mac_prefix:da:45:83 max_tunid:16711680 northd_internal_version:22.06.1-20.23.0-63.4 northd_probe_interval:5000 svc_monitor_mac:ae:de:d6:8a:ee:d3 use_logical_dp_groups:true]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {975338d6-246c-4dca-9a4f-b865d4f805e9}] Until: Durable: Comment: Lock: UUIDName:}]" I1115 05:45:09.782316 63045 master.go:1364] Adding or Updating Node "release-ci-ci-op-k5cwk1pv-7cb14" I1115 05:45:12.609939 63045 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.EndpointSlice total 11 items received I1115 05:45:26.615930 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:update Table:NB_Global Row:map[options:{GoMap:map[e2e_timestamp:1668491126 mac_prefix:da:45:83 max_tunid:16711680 northd_internal_version:22.06.1-20.23.0-63.4 northd_probe_interval:5000 svc_monitor_mac:ae:de:d6:8a:ee:d3 use_logical_dp_groups:true]}] Rows:[] 
Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {975338d6-246c-4dca-9a4f-b865d4f805e9}] Until: Durable: Comment: Lock: UUIDName:}]" I1115 05:45:42.665061 63045 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Service total 9 items received I1115 05:45:50.665344 63045 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.EndpointSlice total 11 items received I1115 05:45:56.615702 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:update Table:NB_Global Row:map[options:{GoMap:map[e2e_timestamp:1668491156 mac_prefix:da:45:83 max_tunid:16711680 northd_internal_version:22.06.1-20.23.0-63.4 northd_probe_interval:5000 svc_monitor_mac:ae:de:d6:8a:ee:d3 use_logical_dp_groups:true]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {975338d6-246c-4dca-9a4f-b865d4f805e9}] Until: Durable: Comment: Lock: UUIDName:}]" I1115 05:46:26.615614 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:update Table:NB_Global Row:map[options:{GoMap:map[e2e_timestamp:1668491186 mac_prefix:da:45:83 max_tunid:16711680 northd_internal_version:22.06.1-20.23.0-63.4 northd_probe_interval:5000 svc_monitor_mac:ae:de:d6:8a:ee:d3 use_logical_dp_groups:true]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {975338d6-246c-4dca-9a4f-b865d4f805e9}] Until: Durable: Comment: Lock: UUIDName:}]" I1115 05:46:33.609451 63045 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Service total 9 items received I1115 05:46:37.608386 63045 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.NetworkPolicy total 0 items received I1115 05:46:56.615781 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:update Table:NB_Global Row:map[options:{GoMap:map[e2e_timestamp:1668491216 mac_prefix:da:45:83 max_tunid:16711680 
northd_internal_version:22.06.1-20.23.0-63.4 northd_probe_interval:5000 svc_monitor_mac:ae:de:d6:8a:ee:d3 use_logical_dp_groups:true]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {975338d6-246c-4dca-9a4f-b865d4f805e9}] Until: Durable: Comment: Lock: UUIDName:}]" I1115 05:47:26.615706 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:update Table:NB_Global Row:map[options:{GoMap:map[e2e_timestamp:1668491246 mac_prefix:da:45:83 max_tunid:16711680 northd_internal_version:22.06.1-20.23.0-63.4 northd_probe_interval:5000 svc_monitor_mac:ae:de:d6:8a:ee:d3 use_logical_dp_groups:true]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {975338d6-246c-4dca-9a4f-b865d4f805e9}] Until: Durable: Comment: Lock: UUIDName:}]" I1115 05:47:52.609668 63045 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Pod total 42 items received I1115 05:47:56.615965 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:update Table:NB_Global Row:map[options:{GoMap:map[e2e_timestamp:1668491276 mac_prefix:da:45:83 max_tunid:16711680 northd_internal_version:22.06.1-20.23.0-63.4 northd_probe_interval:5000 svc_monitor_mac:ae:de:d6:8a:ee:d3 use_logical_dp_groups:true]}] Rows:[] Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {975338d6-246c-4dca-9a4f-b865d4f805e9}] Until: Durable: Comment: Lock: UUIDName:}]" I1115 05:48:22.665384 63045 reflector.go:536] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Node total 18 items received I1115 05:48:26.616239 63045 client.go:783] "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:update Table:NB_Global Row:map[options:{GoMap:map[e2e_timestamp:1668491306 mac_prefix:da:45:83 max_tunid:16711680 northd_internal_version:22.06.1-20.23.0-63.4 northd_probe_interval:5000 svc_monitor_mac:ae:de:d6:8a:ee:d3 use_logical_dp_groups:true]}] Rows:[] 
Columns:[] Mutations:[] Timeout: Where:[where column _uuid == {975338d6-246c-4dca-9a4f-b865d4f805e9}] Until: Durable: Comment: Lock: UUIDName:}]" Error from server (BadRequest): previous terminated container "ovnkube-master" in pod "ovnkube-master-kdsb7" not found + true + for pod in $(kubectl get pods -n $ns -o name) + kubectl describe -n openshift-ovn-kubernetes pod/ovnkube-node-b5wd2 Name: ovnkube-node-b5wd2 Namespace: openshift-ovn-kubernetes Priority: 2000001000 Priority Class Name: system-node-critical Service Account: ovn-kubernetes-node Node: release-ci-ci-op-k5cwk1pv-7cb14/10.0.0.2 Start Time: Tue, 15 Nov 2022 05:39:11 +0000 Labels: app=ovnkube-node component=network controller-revision-hash=5ff7f6464d kubernetes.io/os=linux openshift.io/component=network pod-template-generation=1 type=infra Annotations: target.workload.openshift.io/management: {"effect": "PreferredDuringScheduling"} Status: Running IP: 10.0.0.2 IPs: IP: 10.0.0.2 Controlled By: DaemonSet/ovnkube-node Containers: ovn-controller: Container ID: cri-o://774cb06ba4bc637a16073261ea2e73af8e20c1f658dd8b022fb3a385e1f15dc9 Image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd9365d7ab0a70fd0d67937853bed13eaece6f49895aed34d7eca038f5e0aa77 Image ID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd9365d7ab0a70fd0d67937853bed13eaece6f49895aed34d7eca038f5e0aa77 Port: Host Port: Command: /bin/bash -c set -e if [[ -f "/env/${K8S_NODE}" ]]; then set -o allexport source "/env/${K8S_NODE}" set +o allexport fi # K8S_NODE_IP triggers reconcilation of this daemon when node IP changes echo "$(date -Iseconds) - starting ovn-controller, Node: ${K8S_NODE} IP: ${K8S_NODE_IP}" exec ovn-controller unix:/var/run/openvswitch/db.sock -vfile:off \ --no-chdir --pidfile=/var/run/ovn/ovn-controller.pid \ --syslog-method="null" \ --log-file=/var/log/ovn/acl-audit-log.log \ -vFACILITY:"local0" \ -vconsole:"${OVN_LOG_LEVEL}" -vconsole:"acl_log:off" \ -vPATTERN:console:"%D{%Y-%m-%dT%H:%M:%S.###Z}|%05N|%c%T|%p|%m" 
\ -vsyslog:"acl_log:info" \ -vfile:"acl_log:info" State: Running Started: Tue, 15 Nov 2022 05:39:25 +0000 Ready: True Restart Count: 0 Requests: cpu: 10m memory: 10Mi Environment: OVN_LOG_LEVEL: info K8S_NODE: (v1:spec.nodeName) K8S_NODE_IP: (v1:status.hostIP) Mounts: /dev/log from log-socket (rw) /env from env-overrides (rw) /etc/openvswitch from etc-openvswitch (rw) /etc/ovn/ from etc-openvswitch (rw) /run/openvswitch from run-openvswitch (rw) /run/ovn/ from run-ovn (rw) /var/lib/openvswitch from var-lib-openvswitch (rw) /var/log/ovn from node-log (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rpm6z (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: var-lib-openvswitch: Type: HostPath (bare host directory volume) Path: /var/lib/openvswitch/data HostPathType: etc-openvswitch: Type: HostPath (bare host directory volume) Path: /etc/openvswitch HostPathType: run-openvswitch: Type: HostPath (bare host directory volume) Path: /var/run/openvswitch HostPathType: run-ovn: Type: HostPath (bare host directory volume) Path: /var/run/ovn HostPathType: node-log: Type: HostPath (bare host directory volume) Path: /var/log/ovn HostPathType: log-socket: Type: HostPath (bare host directory volume) Path: /dev/log HostPathType: env-overrides: Type: ConfigMap (a volume populated by a ConfigMap) Name: env-overrides Optional: true kube-api-access-rpm6z: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: DownwardAPI: true ConfigMapName: openshift-service-ca.crt ConfigMapOptional: QoS Class: Burstable Node-Selectors: kubernetes.io/os=linux Tolerations: op=Exists Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 9m31s default-scheduler Successfully assigned openshift-ovn-kubernetes/ovnkube-node-b5wd2 to release-ci-ci-op-k5cwk1pv-7cb14 Normal Pulling 9m24s kubelet Pulling 
image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd9365d7ab0a70fd0d67937853bed13eaece6f49895aed34d7eca038f5e0aa77" Normal Pulled 9m17s kubelet Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd9365d7ab0a70fd0d67937853bed13eaece6f49895aed34d7eca038f5e0aa77" in 6.620824362s Normal Created 9m17s kubelet Created container ovn-controller Normal Started 9m17s kubelet Started container ovn-controller ++ kubectl get -n openshift-ovn-kubernetes pod/ovnkube-node-b5wd2 -o 'jsonpath={.spec.containers[*].name}' + for container in $(kubectl get -n $ns $pod -o jsonpath='{.spec.containers[*].name}') + kubectl logs -n openshift-ovn-kubernetes pod/ovnkube-node-b5wd2 ovn-controller 2022-11-15T05:39:25+00:00 - starting ovn-controller, Node: release-ci-ci-op-k5cwk1pv-7cb14 IP: 10.0.0.2 2022-11-15T05:39:25Z|00001|vlog|INFO|opened log file /var/log/ovn/acl-audit-log.log 2022-11-15T05:39:25.755Z|00002|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connecting... 2022-11-15T05:39:25.755Z|00003|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connected 2022-11-15T05:39:25.757Z|00004|main|INFO|OVN internal version is : [22.06.1-20.23.0-63.4] 2022-11-15T05:39:25.757Z|00005|main|INFO|OVS IDL reconnected, force recompute. 2022-11-15T05:39:25.757Z|00006|main|INFO|OVNSB IDL reconnected, force recompute. 2022-11-15T05:39:26.695Z|00007|reconnect|INFO|unix:/var/run/ovn/ovnsb_db.sock: connecting... 2022-11-15T05:39:26.695Z|00008|reconnect|INFO|unix:/var/run/ovn/ovnsb_db.sock: connected 2022-11-15T05:39:26.705Z|00009|chassis|INFO|Need to specify an encap type and ip 2022-11-15T05:39:26.705Z|00010|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch 2022-11-15T05:39:26.705Z|00011|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting... 2022-11-15T05:39:26.705Z|00012|features|INFO|OVS Feature: ct_zero_snat, state: supported 2022-11-15T05:39:26.705Z|00013|main|INFO|OVS feature set changed, force recompute. 
2022-11-15T05:39:26.705Z|00014|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch 2022-11-15T05:39:26.705Z|00015|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting... 2022-11-15T05:39:26.706Z|00016|chassis|INFO|Need to specify an encap type and ip 2022-11-15T05:39:26.706Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected 2022-11-15T05:39:26.706Z|00018|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected 2022-11-15T05:39:26.706Z|00019|chassis|INFO|Need to specify an encap type and ip 2022-11-15T05:39:26.707Z|00020|chassis|INFO|Need to specify an encap type and ip 2022-11-15T05:39:26.708Z|00021|main|INFO|OVS feature set changed, force recompute. 2022-11-15T05:39:26.708Z|00022|chassis|INFO|Need to specify an encap type and ip 2022-11-15T05:39:26.718Z|00001|pinctrl(ovn_pinctrl1)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch 2022-11-15T05:39:26.718Z|00002|rconn(ovn_pinctrl1)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting... 2022-11-15T05:39:26.720Z|00003|rconn(ovn_pinctrl1)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected 2022-11-15T05:39:27.142Z|00023|binding|INFO|Claiming lport cr-rtos-release-ci-ci-op-k5cwk1pv-7cb14 for this chassis. 2022-11-15T05:39:27.142Z|00024|binding|INFO|cr-rtos-release-ci-ci-op-k5cwk1pv-7cb14: Claiming 0a:58:0a:2a:00:01 10.42.0.1/24 2022-11-15T05:39:27.164Z|00025|binding|INFO|Claiming lport k8s-release-ci-ci-op-k5cwk1pv-7cb14 for this chassis. 2022-11-15T05:39:27.164Z|00026|binding|INFO|k8s-release-ci-ci-op-k5cwk1pv-7cb14: Claiming 52:17:ad:e6:11:7d 10.42.0.2 2022-11-15T05:39:27.215Z|00027|binding|INFO|Claiming lport jtor-GR_release-ci-ci-op-k5cwk1pv-7cb14 for this chassis. 2022-11-15T05:39:27.215Z|00028|binding|INFO|jtor-GR_release-ci-ci-op-k5cwk1pv-7cb14: Claiming router 2022-11-15T05:39:27.215Z|00029|binding|INFO|Claiming lport rtoj-GR_release-ci-ci-op-k5cwk1pv-7cb14 for this chassis. 
2022-11-15T05:39:27.215Z|00030|binding|INFO|rtoj-GR_release-ci-ci-op-k5cwk1pv-7cb14: Claiming 0a:58:64:40:00:02 100.64.0.2/16 2022-11-15T05:39:27.230Z|00031|binding|INFO|Claiming lport etor-GR_release-ci-ci-op-k5cwk1pv-7cb14 for this chassis. 2022-11-15T05:39:27.230Z|00032|binding|INFO|etor-GR_release-ci-ci-op-k5cwk1pv-7cb14: Claiming 42:01:0a:00:00:02 2022-11-15T05:39:27.230Z|00033|binding|INFO|Claiming lport rtoe-GR_release-ci-ci-op-k5cwk1pv-7cb14 for this chassis. 2022-11-15T05:39:27.230Z|00034|binding|INFO|rtoe-GR_release-ci-ci-op-k5cwk1pv-7cb14: Claiming 42:01:0a:00:00:02 10.0.0.2/32 2022-11-15T05:39:27.443Z|00035|binding|INFO|Setting lport k8s-release-ci-ci-op-k5cwk1pv-7cb14 ovn-installed in OVS 2022-11-15T05:39:27.443Z|00036|binding|INFO|Setting lport k8s-release-ci-ci-op-k5cwk1pv-7cb14 up in Southbound 2022-11-15T05:39:33.928Z|00037|binding|INFO|Claiming lport openshift-service-ca_service-ca-77fc4cc659-dp8dn for this chassis. 2022-11-15T05:39:33.928Z|00038|binding|INFO|openshift-service-ca_service-ca-77fc4cc659-dp8dn: Claiming 0a:58:0a:2a:00:03 10.42.0.3 2022-11-15T05:39:33.977Z|00039|binding|INFO|Setting lport openshift-service-ca_service-ca-77fc4cc659-dp8dn ovn-installed in OVS 2022-11-15T05:39:33.977Z|00040|binding|INFO|Setting lport openshift-service-ca_service-ca-77fc4cc659-dp8dn up in Southbound 2022-11-15T05:39:33.998Z|00041|binding|INFO|Claiming lport openshift-storage_topolvm-controller-8456864f89-vg42d for this chassis. 2022-11-15T05:39:33.998Z|00042|binding|INFO|openshift-storage_topolvm-controller-8456864f89-vg42d: Claiming 0a:58:0a:2a:00:07 10.42.0.7 2022-11-15T05:39:33.998Z|00043|binding|INFO|Claiming lport openshift-storage_topolvm-node-2tnh5 for this chassis. 
2022-11-15T05:39:33.998Z|00044|binding|INFO|openshift-storage_topolvm-node-2tnh5: Claiming 0a:58:0a:2a:00:05 10.42.0.5 2022-11-15T05:39:34.089Z|00045|binding|INFO|Setting lport openshift-storage_topolvm-controller-8456864f89-vg42d ovn-installed in OVS 2022-11-15T05:39:34.089Z|00046|binding|INFO|Setting lport openshift-storage_topolvm-controller-8456864f89-vg42d up in Southbound 2022-11-15T05:39:34.089Z|00047|binding|INFO|Setting lport openshift-storage_topolvm-node-2tnh5 ovn-installed in OVS 2022-11-15T05:39:34.089Z|00048|binding|INFO|Setting lport openshift-storage_topolvm-node-2tnh5 up in Southbound 2022-11-15T05:39:41.434Z|00049|memory|INFO|13864 kB peak resident set size after 15.7 seconds 2022-11-15T05:39:41.434Z|00050|memory|INFO|idl-cells-OVN_Southbound:4667 idl-cells-Open_vSwitch:578 if_status_mgr_ifaces_state_usage-KB:1 if_status_mgr_ifaces_usage-KB:1 local_datapath_usage-KB:1 ofctrl_desired_flow_usage-KB:210 ofctrl_installed_flow_usage-KB:154 ofctrl_sb_flow_ref_usage-KB:87 2022-11-15T05:39:41.446Z|00051|binding|INFO|Claiming lport openshift-ingress_router-default-76b7657c68-6xcfc for this chassis. 2022-11-15T05:39:41.446Z|00052|binding|INFO|openshift-ingress_router-default-76b7657c68-6xcfc: Claiming 0a:58:0a:2a:00:04 10.42.0.4 2022-11-15T05:39:41.567Z|00053|binding|INFO|Setting lport openshift-ingress_router-default-76b7657c68-6xcfc ovn-installed in OVS 2022-11-15T05:39:41.567Z|00054|binding|INFO|Setting lport openshift-ingress_router-default-76b7657c68-6xcfc up in Southbound 2022-11-15T05:39:41.589Z|00055|binding|INFO|Claiming lport openshift-dns_dns-default-tw2xt for this chassis. 
2022-11-15T05:39:41.589Z|00056|binding|INFO|openshift-dns_dns-default-tw2xt: Claiming 0a:58:0a:2a:00:06 10.42.0.6 2022-11-15T05:39:41.617Z|00057|binding|INFO|Setting lport openshift-dns_dns-default-tw2xt ovn-installed in OVS 2022-11-15T05:39:41.617Z|00058|binding|INFO|Setting lport openshift-dns_dns-default-tw2xt up in Southbound 2022-11-15T05:39:56.718Z|00059|lflow_cache|INFO|Detected cache inactivity (last active 30001 ms ago): trimming cache + kubectl logs --previous=true -n openshift-ovn-kubernetes pod/ovnkube-node-b5wd2 ovn-controller Error from server (BadRequest): previous terminated container "ovn-controller" in pod "ovnkube-node-b5wd2" not found + true + for ns in $(kubectl get namespace -o jsonpath='{.items..metadata.name}') ++ kubectl get pods -n openshift-route-controller-manager -o name + for ns in $(kubectl get namespace -o jsonpath='{.items..metadata.name}') ++ kubectl get pods -n openshift-service-ca -o name + for pod in $(kubectl get pods -n $ns -o name) + kubectl describe -n openshift-service-ca pod/service-ca-77fc4cc659-dp8dn Name: service-ca-77fc4cc659-dp8dn Namespace: openshift-service-ca Priority: 2000000000 Priority Class Name: system-cluster-critical Service Account: service-ca Node: release-ci-ci-op-k5cwk1pv-7cb14/10.0.0.2 Start Time: Tue, 15 Nov 2022 05:39:33 +0000 Labels: app=service-ca pod-template-hash=77fc4cc659 service-ca=true Annotations: k8s.ovn.org/pod-networks: {"default":{"ip_addresses":["10.42.0.3/24"],"mac_address":"0a:58:0a:2a:00:03","gateway_ips":["10.42.0.1"],"ip_address":"10.42.0.3/24","gat... 
openshift.io/scc: restricted-v2 seccomp.security.alpha.kubernetes.io/pod: runtime/default target.workload.openshift.io/management: {"effect": "PreferredDuringScheduling"} Status: Running IP: 10.42.0.3 IPs: IP: 10.42.0.3 Controlled By: ReplicaSet/service-ca-77fc4cc659 Containers: service-ca-controller: Container ID: cri-o://50ac092ec2bc547d58015508d68066100c00695e649613d9e69c84a0d3c1128a Image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2741fb3a1088349c089868bb57bbd1d0416e4425f4e8f95b62df9bdc267a19d7 Image ID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2741fb3a1088349c089868bb57bbd1d0416e4425f4e8f95b62df9bdc267a19d7 Port: 8443/TCP Host Port: 0/TCP Command: service-ca-operator controller Args: -v=2 State: Running Started: Tue, 15 Nov 2022 05:39:37 +0000 Ready: True Restart Count: 0 Requests: cpu: 10m memory: 120Mi Environment: Mounts: /var/run/configmaps/signing-cabundle from signing-cabundle (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hxdl4 (ro) /var/run/secrets/signing-key from signing-key (rw) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: signing-key: Type: Secret (a volume populated by a Secret) SecretName: signing-key Optional: false signing-cabundle: Type: ConfigMap (a volume populated by a ConfigMap) Name: signing-cabundle Optional: false kube-api-access-hxdl4: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: DownwardAPI: true ConfigMapName: openshift-service-ca.crt ConfigMapOptional: QoS Class: Burstable Node-Selectors: node-role.kubernetes.io/master= Tolerations: node-role.kubernetes.io/master:NoSchedule op=Exists node.kubernetes.io/memory-pressure:NoSchedule op=Exists node.kubernetes.io/not-ready:NoExecute op=Exists for 120s node.kubernetes.io/unreachable:NoExecute op=Exists for 120s Events: Type Reason Age From Message ---- ------ ---- ---- ------- 
Warning FailedScheduling 9m32s default-scheduler 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. Normal Scheduled 9m9s default-scheduler Successfully assigned openshift-service-ca/service-ca-77fc4cc659-dp8dn to release-ci-ci-op-k5cwk1pv-7cb14 Normal Pulling 9m9s kubelet Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2741fb3a1088349c089868bb57bbd1d0416e4425f4e8f95b62df9bdc267a19d7" Normal Pulled 9m6s kubelet Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2741fb3a1088349c089868bb57bbd1d0416e4425f4e8f95b62df9bdc267a19d7" in 3.004804839s Normal Created 9m6s kubelet Created container service-ca-controller Normal Started 9m6s kubelet Started container service-ca-controller ++ kubectl get -n openshift-service-ca pod/service-ca-77fc4cc659-dp8dn -o 'jsonpath={.spec.containers[*].name}' + for container in $(kubectl get -n $ns $pod -o jsonpath='{.spec.containers[*].name}') + kubectl logs -n openshift-service-ca pod/service-ca-77fc4cc659-dp8dn service-ca-controller W1115 05:39:37.595945 1 cmd.go:213] Using insecure, self-signed certificates I1115 05:39:37.596213 1 crypto.go:601] Generating new CA for service-ca-controller-signer@1668490777 cert, and key in /tmp/serving-cert-1718420467/serving-signer.crt, /tmp/serving-cert-1718420467/serving-signer.key I1115 05:39:38.047057 1 observer_polling.go:159] Starting file observer I1115 05:39:38.059048 1 builder.go:262] service-ca-controller version v4.12.0-202211081106.p0.g299b709.assembly.stream-0-g7526fa4- I1115 05:39:38.059898 1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="serving-cert::/tmp/serving-cert-1718420467/tls.crt::/tmp/serving-cert-1718420467/tls.key" I1115 05:39:38.353653 1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController W1115 05:39:38.358089 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected. W1115 05:39:38.358117 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected. I1115 05:39:38.358348 1 genericapiserver.go:412] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete W1115 05:39:38.359253 1 builder.go:321] unable to get cluster infrastructure status, using HA cluster values for leader election: the server could not find the requested resource (get infrastructures.config.openshift.io cluster) I1115 05:39:38.359668 1 leaderelection.go:248] attempting to acquire leader lease openshift-service-ca/service-ca-controller-lock... I1115 05:39:38.361583 1 event.go:285] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-service-ca", Name:"service-ca", UID:"a0969a2d-2d53-4ef2-94b2-b10be92bb19e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ClusterInfrastructureStatus' unable to get cluster infrastructure status, using HA cluster values for leader election: the server could not find the requested resource (get infrastructures.config.openshift.io cluster) I1115 05:39:38.362356 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/tmp/serving-cert-1718420467/tls.crt::/tmp/serving-cert-1718420467/tls.key" certDetail="\"localhost\" [serving] validServingFor=[localhost] issuer=\"service-ca-controller-signer@1668490777\" (2022-11-15 05:39:37 +0000 UTC to 2022-12-15 05:39:38 +0000 UTC (now=2022-11-15 05:39:38.362328372 +0000 UTC))" I1115 05:39:38.362488 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1668490778\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1668490778\" (2022-11-15 04:39:38 +0000 UTC to 2023-11-15 04:39:38 +0000 UTC (now=2022-11-15 05:39:38.362472041 +0000 UTC))" I1115 05:39:38.362524 1 
secure_serving.go:210] Serving securely on [::]:8443 I1115 05:39:38.362542 1 genericapiserver.go:477] [graceful-termination] waiting for shutdown to be initiated I1115 05:39:38.362824 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController I1115 05:39:38.362843 1 shared_informer.go:255] Waiting for caches to sync for RequestHeaderAuthRequestController I1115 05:39:38.362864 1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/tmp/serving-cert-1718420467/tls.crt::/tmp/serving-cert-1718420467/tls.key" I1115 05:39:38.363220 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" I1115 05:39:38.364138 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" I1115 05:39:38.364163 1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file I1115 05:39:38.364197 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" I1115 05:39:38.364210 1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file I1115 05:39:38.370268 1 leaderelection.go:258] successfully acquired lease openshift-service-ca/service-ca-controller-lock I1115 05:39:38.371755 1 base_controller.go:67] Waiting for caches to sync for ServiceServingCertUpdateController I1115 05:39:38.371777 1 event.go:285] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"openshift-service-ca", Name:"service-ca-controller-lock", UID:"0eb090aa-66b4-45f6-9e93-052d4f207dcc", APIVersion:"v1", ResourceVersion:"622", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' service-ca-77fc4cc659-dp8dn_e2ac9889-b171-4edc-a0e0-51283b761592 became leader I1115 05:39:38.371788 1 event.go:285] Event(v1.ObjectReference{Kind:"Lease", 
Namespace:"openshift-service-ca", Name:"service-ca-controller-lock", UID:"b90c751e-2dec-4c20-bc06-6070ae4a992f", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"623", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' service-ca-77fc4cc659-dp8dn_e2ac9889-b171-4edc-a0e0-51283b761592 became leader I1115 05:39:38.372292 1 event.go:285] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-service-ca", Name:"service-ca", UID:"a0969a2d-2d53-4ef2-94b2-b10be92bb19e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "CRDCABundleInjector" resync interval is set to 0s which might lead to client request throttling I1115 05:39:38.373281 1 base_controller.go:67] Waiting for caches to sync for APIServiceCABundleInjector I1115 05:39:38.373308 1 base_controller.go:67] Waiting for caches to sync for ConfigMapCABundleInjector I1115 05:39:38.373319 1 base_controller.go:67] Waiting for caches to sync for CRDCABundleInjector I1115 05:39:38.373331 1 base_controller.go:67] Waiting for caches to sync for MutatingWebhookCABundleInjector I1115 05:39:38.373343 1 base_controller.go:67] Waiting for caches to sync for ValidatingWebhookCABundleInjector I1115 05:39:38.373353 1 base_controller.go:67] Waiting for caches to sync for LegacyVulnerableConfigMapCABundleInjector I1115 05:39:38.373882 1 base_controller.go:67] Waiting for caches to sync for ServiceServingCertController I1115 05:39:38.373908 1 event.go:285] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-service-ca", Name:"service-ca", UID:"a0969a2d-2d53-4ef2-94b2-b10be92bb19e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "APIServiceCABundleInjector" resync interval is set to 0s which might lead to client request throttling I1115 05:39:38.373925 1 event.go:285] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-service-ca", Name:"service-ca", 
UID:"a0969a2d-2d53-4ef2-94b2-b10be92bb19e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "ConfigMapCABundleInjector" resync interval is set to 0s which might lead to client request throttling I1115 05:39:38.374030 1 event.go:285] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-service-ca", Name:"service-ca", UID:"a0969a2d-2d53-4ef2-94b2-b10be92bb19e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "MutatingWebhookCABundleInjector" resync interval is set to 0s which might lead to client request throttling I1115 05:39:38.374049 1 event.go:285] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-service-ca", Name:"service-ca", UID:"a0969a2d-2d53-4ef2-94b2-b10be92bb19e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "ValidatingWebhookCABundleInjector" resync interval is set to 0s which might lead to client request throttling I1115 05:39:38.374059 1 event.go:285] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-service-ca", Name:"service-ca", UID:"a0969a2d-2d53-4ef2-94b2-b10be92bb19e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "LegacyVulnerableConfigMapCABundleInjector" resync interval is set to 0s which might lead to client request throttling I1115 05:39:38.374083 1 event.go:285] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-service-ca", Name:"service-ca", UID:"a0969a2d-2d53-4ef2-94b2-b10be92bb19e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "ServiceServingCertController" resync interval is set to 0s which might lead to client request throttling I1115 05:39:38.374096 1 event.go:285] Event(v1.ObjectReference{Kind:"Deployment", 
Namespace:"openshift-service-ca", Name:"service-ca", UID:"a0969a2d-2d53-4ef2-94b2-b10be92bb19e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'FastControllerResync' Controller "ServiceServingCertUpdateController" resync interval is set to 0s which might lead to client request throttling I1115 05:39:38.464158 1 shared_informer.go:262] Caches are synced for RequestHeaderAuthRequestController I1115 05:39:38.464281 1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file I1115 05:39:38.464307 1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file I1115 05:39:38.464441 1 tlsconfig.go:178] "Loaded client CA" index=0 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"aggregator-signer\" [] issuer=\"\" (2022-11-15 05:38:28 +0000 UTC to 2023-11-15 05:38:29 +0000 UTC (now=2022-11-15 05:39:38.464419177 +0000 UTC))" I1115 05:39:38.464587 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/tmp/serving-cert-1718420467/tls.crt::/tmp/serving-cert-1718420467/tls.key" certDetail="\"localhost\" [serving] validServingFor=[localhost] issuer=\"service-ca-controller-signer@1668490777\" (2022-11-15 05:39:37 +0000 UTC to 2022-12-15 05:39:38 +0000 UTC (now=2022-11-15 05:39:38.464572414 +0000 UTC))" I1115 05:39:38.464678 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1668490778\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1668490778\" (2022-11-15 04:39:38 +0000 UTC to 2023-11-15 04:39:38 +0000 UTC (now=2022-11-15 05:39:38.464665386 +0000 UTC))" I1115 05:39:38.464882 1 tlsconfig.go:178] "Loaded client CA" index=0 
certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-control-plane-signer\" [] issuer=\"\" (2022-11-15 05:38:26 +0000 UTC to 2023-11-15 05:38:27 +0000 UTC (now=2022-11-15 05:39:38.46486877 +0000 UTC))" I1115 05:39:38.464908 1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-apiserver-to-kubelet-signer\" [] issuer=\"\" (2022-11-15 05:38:27 +0000 UTC to 2023-11-15 05:38:28 +0000 UTC (now=2022-11-15 05:39:38.464897506 +0000 UTC))" I1115 05:39:38.464925 1 tlsconfig.go:178] "Loaded client CA" index=2 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"admin-kubeconfig-signer\" [] issuer=\"\" (2022-11-15 05:38:27 +0000 UTC to 2032-11-12 05:38:28 +0000 UTC (now=2022-11-15 05:39:38.464914844 +0000 UTC))" I1115 05:39:38.464941 1 tlsconfig.go:178] "Loaded client CA" index=3 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kubelet-signer\" [] issuer=\"\" (2022-11-15 05:38:27 +0000 UTC to 2023-11-15 05:38:28 +0000 UTC (now=2022-11-15 05:39:38.464932022 +0000 UTC))" I1115 05:39:38.464961 1 tlsconfig.go:178] "Loaded client CA" index=4 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"kube-csr-signer\" [] issuer=\"kubelet-signer\" (2022-11-15 05:38:28 +0000 UTC to 2023-11-15 05:38:29 +0000 UTC (now=2022-11-15 05:39:38.464951249 +0000 UTC))" I1115 
05:39:38.464978 1 tlsconfig.go:178] "Loaded client CA" index=5 certName="client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" certDetail="\"aggregator-signer\" [] issuer=\"\" (2022-11-15 05:38:28 +0000 UTC to 2023-11-15 05:38:29 +0000 UTC (now=2022-11-15 05:39:38.464967498 +0000 UTC))" I1115 05:39:38.465075 1 tlsconfig.go:200] "Loaded serving cert" certName="serving-cert::/tmp/serving-cert-1718420467/tls.crt::/tmp/serving-cert-1718420467/tls.key" certDetail="\"localhost\" [serving] validServingFor=[localhost] issuer=\"service-ca-controller-signer@1668490777\" (2022-11-15 05:39:37 +0000 UTC to 2022-12-15 05:39:38 +0000 UTC (now=2022-11-15 05:39:38.465063112 +0000 UTC))" I1115 05:39:38.465157 1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1668490778\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1668490778\" (2022-11-15 04:39:38 +0000 UTC to 2023-11-15 04:39:38 +0000 UTC (now=2022-11-15 05:39:38.465147266 +0000 UTC))" I1115 05:39:38.472319 1 base_controller.go:73] Caches are synced for ServiceServingCertUpdateController I1115 05:39:38.472344 1 base_controller.go:110] Starting #1 worker of ServiceServingCertUpdateController controller ... I1115 05:39:38.472351 1 base_controller.go:110] Starting #2 worker of ServiceServingCertUpdateController controller ... I1115 05:39:38.472362 1 base_controller.go:110] Starting #3 worker of ServiceServingCertUpdateController controller ... I1115 05:39:38.472366 1 base_controller.go:110] Starting #4 worker of ServiceServingCertUpdateController controller ... I1115 05:39:38.472371 1 base_controller.go:110] Starting #5 worker of ServiceServingCertUpdateController controller ... 
I1115 05:39:38.473507 1 base_controller.go:73] Caches are synced for LegacyVulnerableConfigMapCABundleInjector I1115 05:39:38.473531 1 base_controller.go:110] Starting #1 worker of LegacyVulnerableConfigMapCABundleInjector controller ... I1115 05:39:38.473545 1 base_controller.go:110] Starting #2 worker of LegacyVulnerableConfigMapCABundleInjector controller ... I1115 05:39:38.473551 1 base_controller.go:110] Starting #3 worker of LegacyVulnerableConfigMapCABundleInjector controller ... I1115 05:39:38.473558 1 base_controller.go:110] Starting #4 worker of LegacyVulnerableConfigMapCABundleInjector controller ... I1115 05:39:38.473564 1 base_controller.go:110] Starting #5 worker of LegacyVulnerableConfigMapCABundleInjector controller ... I1115 05:39:38.473612 1 base_controller.go:73] Caches are synced for APIServiceCABundleInjector I1115 05:39:38.473619 1 base_controller.go:110] Starting #1 worker of APIServiceCABundleInjector controller ... I1115 05:39:38.473623 1 base_controller.go:110] Starting #2 worker of APIServiceCABundleInjector controller ... I1115 05:39:38.473628 1 base_controller.go:110] Starting #3 worker of APIServiceCABundleInjector controller ... I1115 05:39:38.473632 1 base_controller.go:110] Starting #4 worker of APIServiceCABundleInjector controller ... I1115 05:39:38.473636 1 base_controller.go:110] Starting #5 worker of APIServiceCABundleInjector controller ... I1115 05:39:38.473648 1 base_controller.go:73] Caches are synced for ConfigMapCABundleInjector I1115 05:39:38.473652 1 base_controller.go:110] Starting #1 worker of ConfigMapCABundleInjector controller ... I1115 05:39:38.473656 1 base_controller.go:110] Starting #2 worker of ConfigMapCABundleInjector controller ... I1115 05:39:38.473662 1 base_controller.go:110] Starting #3 worker of ConfigMapCABundleInjector controller ... I1115 05:39:38.473677 1 base_controller.go:110] Starting #4 worker of ConfigMapCABundleInjector controller ... 
I1115 05:39:38.473681 1 base_controller.go:110] Starting #5 worker of ConfigMapCABundleInjector controller ... I1115 05:39:38.473704 1 configmap.go:107] updating configmap openshift-ingress/service-ca-bundle with the service signing CA bundle I1115 05:39:38.473930 1 base_controller.go:73] Caches are synced for ServiceServingCertController I1115 05:39:38.473947 1 base_controller.go:110] Starting #1 worker of ServiceServingCertController controller ... I1115 05:39:38.473956 1 base_controller.go:110] Starting #2 worker of ServiceServingCertController controller ... I1115 05:39:38.473965 1 base_controller.go:110] Starting #3 worker of ServiceServingCertController controller ... I1115 05:39:38.473971 1 base_controller.go:110] Starting #4 worker of ServiceServingCertController controller ... I1115 05:39:38.473979 1 base_controller.go:110] Starting #5 worker of ServiceServingCertController controller ... I1115 05:39:38.495672 1 base_controller.go:73] Caches are synced for CRDCABundleInjector I1115 05:39:38.495692 1 base_controller.go:110] Starting #1 worker of CRDCABundleInjector controller ... I1115 05:39:38.495711 1 base_controller.go:110] Starting #2 worker of CRDCABundleInjector controller ... I1115 05:39:38.495716 1 base_controller.go:110] Starting #3 worker of CRDCABundleInjector controller ... I1115 05:39:38.495721 1 base_controller.go:110] Starting #4 worker of CRDCABundleInjector controller ... I1115 05:39:38.495726 1 base_controller.go:110] Starting #5 worker of CRDCABundleInjector controller ... I1115 05:39:38.495751 1 base_controller.go:73] Caches are synced for MutatingWebhookCABundleInjector I1115 05:39:38.495756 1 base_controller.go:110] Starting #1 worker of MutatingWebhookCABundleInjector controller ... I1115 05:39:38.495760 1 base_controller.go:110] Starting #2 worker of MutatingWebhookCABundleInjector controller ... I1115 05:39:38.495765 1 base_controller.go:110] Starting #3 worker of MutatingWebhookCABundleInjector controller ... 
I1115 05:39:38.495769 1 base_controller.go:110] Starting #4 worker of MutatingWebhookCABundleInjector controller ... I1115 05:39:38.495775 1 base_controller.go:110] Starting #5 worker of MutatingWebhookCABundleInjector controller ... I1115 05:39:38.495788 1 base_controller.go:73] Caches are synced for ValidatingWebhookCABundleInjector I1115 05:39:38.495793 1 base_controller.go:110] Starting #1 worker of ValidatingWebhookCABundleInjector controller ... I1115 05:39:38.495807 1 base_controller.go:110] Starting #2 worker of ValidatingWebhookCABundleInjector controller ... I1115 05:39:38.495811 1 base_controller.go:110] Starting #3 worker of ValidatingWebhookCABundleInjector controller ... I1115 05:39:38.495815 1 base_controller.go:110] Starting #4 worker of ValidatingWebhookCABundleInjector controller ... I1115 05:39:38.495819 1 base_controller.go:110] Starting #5 worker of ValidatingWebhookCABundleInjector controller ... + kubectl logs --previous=true -n openshift-service-ca pod/service-ca-77fc4cc659-dp8dn service-ca-controller Error from server (BadRequest): previous terminated container "service-ca-controller" in pod "service-ca-77fc4cc659-dp8dn" not found + true + for ns in $(kubectl get namespace -o jsonpath='{.items..metadata.name}') ++ kubectl get pods -n openshift-storage -o name + for pod in $(kubectl get pods -n $ns -o name) + kubectl describe -n openshift-storage pod/topolvm-controller-8456864f89-vg42d Name: topolvm-controller-8456864f89-vg42d Namespace: openshift-storage Priority: 2000000000 Priority Class Name: system-cluster-critical Service Account: topolvm-controller Node: release-ci-ci-op-k5cwk1pv-7cb14/10.0.0.2 Start Time: Tue, 15 Nov 2022 05:39:33 +0000 Labels: app.kubernetes.io/name=topolvm-controller pod-template-hash=8456864f89 Annotations: k8s.ovn.org/pod-networks: {"default":{"ip_addresses":["10.42.0.7/24"],"mac_address":"0a:58:0a:2a:00:07","gateway_ips":["10.42.0.1"],"ip_address":"10.42.0.7/24","gat... 
Status: Running IP: 10.42.0.7 IPs: IP: 10.42.0.7 Controlled By: ReplicaSet/topolvm-controller-8456864f89 Init Containers: self-signed-cert-generator: Container ID: cri-o://4d3eb9670384204928984c5c4142be7f5e984b21e213d0c36934f1460bfbd076 Image: registry.access.redhat.com/ubi8/openssl@sha256:8b41865d30b7947de68a9c1747616bce4efab4f60f68f8b7016cd84d7708af6b Image ID: registry.access.redhat.com/ubi8/openssl@sha256:8b41865d30b7947de68a9c1747616bce4efab4f60f68f8b7016cd84d7708af6b Port: Host Port: Command: /usr/bin/bash -c openssl req -nodes -x509 -newkey rsa:4096 -subj '/DC=self_signed_certificate' -keyout /certs/tls.key -out /certs/tls.crt -days 3650 State: Terminated Reason: Completed Exit Code: 0 Started: Tue, 15 Nov 2022 05:39:37 +0000 Finished: Tue, 15 Nov 2022 05:39:38 +0000 Ready: True Restart Count: 0 Environment: Mounts: /certs from certs (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fnrvt (ro) Containers: topolvm-controller: Container ID: cri-o://f2f5d3385c45532b43affd2ccaf5fd88e8032d2606725c3d7a20690fe30c437d Image: registry.redhat.io/odf4/odf-topolvm-rhel8@sha256:362c41177d086fc7c8d4fa4ac3bbedb18b1902e950feead9219ea59d1ad0e7ad Image ID: registry.redhat.io/odf4/odf-topolvm-rhel8@sha256:362c41177d086fc7c8d4fa4ac3bbedb18b1902e950feead9219ea59d1ad0e7ad Port: 9808/TCP Host Port: 0/TCP Command: /topolvm-controller --cert-dir=/certs State: Running Started: Tue, 15 Nov 2022 05:39:43 +0000 Ready: True Restart Count: 0 Requests: cpu: 250m memory: 250Mi Liveness: http-get http://:healthz/healthz delay=10s timeout=3s period=60s #success=1 #failure=3 Readiness: http-get http://:8080/metrics delay=0s timeout=1s period=10s #success=1 #failure=3 Environment: Mounts: /certs from certs (rw) /run/topolvm from socket-dir (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fnrvt (ro) csi-provisioner: Container ID: cri-o://ee859c97d7835a634878a3bd6a56621bc6aa299ad4e046a6d83a21646a43bbc7 Image: 
registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4b7d8035055a867b14265495bd2787db608b9ff39ed4e6f65ff24488a2e488d2 Image ID: registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4b7d8035055a867b14265495bd2787db608b9ff39ed4e6f65ff24488a2e488d2 Port: Host Port: Args: --csi-address=/run/topolvm/csi-topolvm.sock --enable-capacity --capacity-ownerref-level=2 --capacity-poll-interval=30s --feature-gates=Topology=true State: Running Started: Tue, 15 Nov 2022 05:39:47 +0000 Ready: True Restart Count: 0 Requests: cpu: 100m memory: 100Mi Environment: POD_NAME: topolvm-controller-8456864f89-vg42d (v1:metadata.name) NAMESPACE: openshift-storage (v1:metadata.namespace) Mounts: /run/topolvm from socket-dir (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fnrvt (ro) csi-resizer: Container ID: cri-o://a08e4f6a696fda341f50830b1a2aee1a4e771813cfe04f5830a029366ed5ef06 Image: registry.redhat.io/openshift4/ose-csi-external-resizer@sha256:ca34c46c4a4c1a4462b8aa89d1dbb5427114da098517954895ff797146392898 Image ID: registry.redhat.io/openshift4/ose-csi-external-resizer@sha256:ca34c46c4a4c1a4462b8aa89d1dbb5427114da098517954895ff797146392898 Port: Host Port: Args: --csi-address=/run/topolvm/csi-topolvm.sock State: Running Started: Tue, 15 Nov 2022 05:39:50 +0000 Ready: True Restart Count: 0 Environment: Mounts: /run/topolvm from socket-dir (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fnrvt (ro) liveness-probe: Container ID: cri-o://c2d9c8e2edf1ff2fb5351b7c35d3260c7b2ca7f29fab76bcfcb9e5bfa9037a62 Image: registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e4b0f6c89a12d26babdc2feae7d13d3f281ac4d38c24614c13c230b4a29ec56e Image ID: registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e4b0f6c89a12d26babdc2feae7d13d3f281ac4d38c24614c13c230b4a29ec56e Port: Host Port: Args: --csi-address=/run/topolvm/csi-topolvm.sock State: Running Started: Tue, 15 Nov 2022 05:39:53 +0000 Ready: True Restart Count: 0 
Environment: Mounts: /run/topolvm from socket-dir (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fnrvt (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: socket-dir: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: SizeLimit: certs: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: SizeLimit: kube-api-access-fnrvt: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: DownwardAPI: true ConfigMapName: openshift-service-ca.crt ConfigMapOptional: QoS Class: Burstable Node-Selectors: Tolerations: node.kubernetes.io/memory-pressure:NoSchedule op=Exists node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling 9m32s default-scheduler 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. 
Normal Scheduled 9m10s default-scheduler Successfully assigned openshift-storage/topolvm-controller-8456864f89-vg42d to release-ci-ci-op-k5cwk1pv-7cb14 Normal Pulling 9m9s kubelet Pulling image "registry.access.redhat.com/ubi8/openssl@sha256:8b41865d30b7947de68a9c1747616bce4efab4f60f68f8b7016cd84d7708af6b" Normal Pulled 9m6s kubelet Successfully pulled image "registry.access.redhat.com/ubi8/openssl@sha256:8b41865d30b7947de68a9c1747616bce4efab4f60f68f8b7016cd84d7708af6b" in 2.863748219s Normal Created 9m6s kubelet Created container self-signed-cert-generator Normal Started 9m6s kubelet Started container self-signed-cert-generator Normal Pulling 9m5s kubelet Pulling image "registry.redhat.io/odf4/odf-topolvm-rhel8@sha256:362c41177d086fc7c8d4fa4ac3bbedb18b1902e950feead9219ea59d1ad0e7ad" Normal Pulled 9m kubelet Successfully pulled image "registry.redhat.io/odf4/odf-topolvm-rhel8@sha256:362c41177d086fc7c8d4fa4ac3bbedb18b1902e950feead9219ea59d1ad0e7ad" in 4.71614729s Normal Created 9m kubelet Created container topolvm-controller Normal Started 9m kubelet Started container topolvm-controller Normal Pulling 9m kubelet Pulling image "registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4b7d8035055a867b14265495bd2787db608b9ff39ed4e6f65ff24488a2e488d2" Normal Pulled 8m56s kubelet Successfully pulled image "registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4b7d8035055a867b14265495bd2787db608b9ff39ed4e6f65ff24488a2e488d2" in 3.738667132s Normal Created 8m56s kubelet Created container csi-provisioner Normal Started 8m56s kubelet Started container csi-provisioner Normal Pulling 8m56s kubelet Pulling image "registry.redhat.io/openshift4/ose-csi-external-resizer@sha256:ca34c46c4a4c1a4462b8aa89d1dbb5427114da098517954895ff797146392898" Normal Pulled 8m53s kubelet Successfully pulled image "registry.redhat.io/openshift4/ose-csi-external-resizer@sha256:ca34c46c4a4c1a4462b8aa89d1dbb5427114da098517954895ff797146392898" in 2.966738492s Normal Created 8m53s 
kubelet Created container csi-resizer Normal Started 8m53s kubelet Started container csi-resizer Normal Pulling 8m53s kubelet Pulling image "registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e4b0f6c89a12d26babdc2feae7d13d3f281ac4d38c24614c13c230b4a29ec56e" Normal Pulled 8m50s kubelet Successfully pulled image "registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e4b0f6c89a12d26babdc2feae7d13d3f281ac4d38c24614c13c230b4a29ec56e" in 2.426765222s Normal Created 8m50s kubelet Created container liveness-probe Normal Started 8m50s kubelet Started container liveness-probe ++ kubectl get -n openshift-storage pod/topolvm-controller-8456864f89-vg42d -o 'jsonpath={.spec.containers[*].name}' + for container in $(kubectl get -n $ns $pod -o jsonpath='{.spec.containers[*].name}') + kubectl logs -n openshift-storage pod/topolvm-controller-8456864f89-vg42d topolvm-controller {"level":"info","ts":1668490783.806683,"logger":"controller-runtime.metrics","msg":"Metrics server is starting to listen","addr":":8080"} {"level":"info","ts":1668490783.8071105,"logger":"controller-runtime.webhook","msg":"Registering webhook","path":"/pod/mutate"} {"level":"info","ts":1668490783.8071997,"logger":"controller-runtime.webhook","msg":"Registering webhook","path":"/pvc/mutate"} {"level":"info","ts":1668490783.8075063,"logger":"setup","msg":"starting manager"} {"level":"info","ts":1668490783.8075995,"logger":"controller-runtime.webhook.webhooks","msg":"Starting webhook server"} {"level":"info","ts":1668490783.8077338,"msg":"Starting server","path":"/metrics","kind":"metrics","addr":"[::]:8080"} {"level":"info","ts":1668490783.8077922,"msg":"Starting server","kind":"health probe","addr":"[::]:8081"} {"level":"info","ts":1668490783.8079307,"logger":"controller-runtime.certwatcher","msg":"Updated current TLS certificate"} {"level":"info","ts":1668490783.8080757,"logger":"controller-runtime.webhook","msg":"Serving webhook server","host":"","port":9443} 
{"level":"info","ts":1668490783.8082361,"logger":"controller-runtime.certwatcher","msg":"Starting certificate watcher"} I1115 05:39:43.908282 1 leaderelection.go:248] attempting to acquire leader lease openshift-storage/topolvm... I1115 05:39:43.916589 1 leaderelection.go:258] successfully acquired lease openshift-storage/topolvm {"level":"info","ts":1668490783.916864,"logger":"controller.persistentvolumeclaim","msg":"Starting EventSource","reconciler group":"","reconciler kind":"PersistentVolumeClaim","source":"kind source: *v1.PersistentVolumeClaim"} {"level":"info","ts":1668490783.9169023,"logger":"controller.persistentvolumeclaim","msg":"Starting Controller","reconciler group":"","reconciler kind":"PersistentVolumeClaim"} {"level":"info","ts":1668490783.916865,"logger":"controller.node","msg":"Starting EventSource","reconciler group":"","reconciler kind":"Node","source":"kind source: *v1.Node"} {"level":"info","ts":1668490783.91692,"logger":"controller.node","msg":"Starting Controller","reconciler group":"","reconciler kind":"Node"} {"level":"info","ts":1668490783.9169378,"logger":"controller.persistentvolumeclaim","msg":"Starting workers","reconciler group":"","reconciler kind":"PersistentVolumeClaim","worker count":1} {"level":"info","ts":1668490784.0177116,"logger":"controller.node","msg":"Starting workers","reconciler group":"","reconciler kind":"Node","worker count":1} {"level":"info","ts":1668490787.6671782,"logger":"driver.identity","msg":"Probe","req":""} {"level":"info","ts":1668490787.668013,"logger":"driver.identity","msg":"GetPluginInfo","req":""} {"level":"info","ts":1668490787.6685586,"logger":"driver.identity","msg":"GetPluginCapabilities","req":""} {"level":"info","ts":1668490790.763843,"logger":"driver.identity","msg":"Probe","req":""} {"level":"info","ts":1668490790.7643752,"logger":"driver.identity","msg":"GetPluginInfo","req":""} {"level":"info","ts":1668490790.7647316,"logger":"driver.identity","msg":"GetPluginCapabilities","req":""} 
{"level":"info","ts":1668490793.3267367,"logger":"driver.identity","msg":"GetPluginInfo","req":""}
{"level":"info","ts":1668490812.7831068,"logger":"driver.controller","msg":"GetCapacity called","volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{}}],"parameters":{"csi.storage.k8s.io/fstype":"xfs"},"accessible_topology":"segments:{key:\"topology.topolvm.cybozu.com/node\" value:\"release-ci-ci-op-k5cwk1pv-7cb14\"}"}
{"level":"info","ts":1668490812.7832792,"logger":"driver.controller","msg":"capability argument is not nil, but TopoLVM ignores it"}
{"level":"info","ts":1668490824.0981727,"logger":"driver.identity","msg":"Probe","req":""}
{"level":"info","ts":1668490842.782854,"logger":"driver.controller","msg":"GetCapacity called","volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{}}],"parameters":{"csi.storage.k8s.io/fstype":"xfs"},"accessible_topology":"segments:{key:\"topology.topolvm.cybozu.com/node\" value:\"release-ci-ci-op-k5cwk1pv-7cb14\"}"}
{"level":"info","ts":1668490842.7831287,"logger":"driver.controller","msg":"capability argument is not nil, but TopoLVM ignores it"}
{"level":"info","ts":1668490872.7834747,"logger":"driver.controller","msg":"GetCapacity called","volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{}}],"parameters":{"csi.storage.k8s.io/fstype":"xfs"},"accessible_topology":"segments:{key:\"topology.topolvm.cybozu.com/node\" value:\"release-ci-ci-op-k5cwk1pv-7cb14\"}"}
{"level":"info","ts":1668490872.7836246,"logger":"driver.controller","msg":"capability argument is not nil, but TopoLVM ignores it"}
{"level":"info","ts":1668490884.0988503,"logger":"driver.identity","msg":"Probe","req":""}
{"level":"info","ts":1668490902.7843084,"logger":"driver.controller","msg":"GetCapacity called","volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{}}],"parameters":{"csi.storage.k8s.io/fstype":"xfs"},"accessible_topology":"segments:{key:\"topology.topolvm.cybozu.com/node\" value:\"release-ci-ci-op-k5cwk1pv-7cb14\"}"}
{"level":"info","ts":1668490902.7843828,"logger":"driver.controller","msg":"capability argument is not nil, but TopoLVM ignores it"}
{"level":"info","ts":1668490932.7847962,"logger":"driver.controller","msg":"GetCapacity called","volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{}}],"parameters":{"csi.storage.k8s.io/fstype":"xfs"},"accessible_topology":"segments:{key:\"topology.topolvm.cybozu.com/node\" value:\"release-ci-ci-op-k5cwk1pv-7cb14\"}"}
{"level":"info","ts":1668490932.7853196,"logger":"driver.controller","msg":"capability argument is not nil, but TopoLVM ignores it"}
{"level":"info","ts":1668490944.099035,"logger":"driver.identity","msg":"Probe","req":""}
{"level":"info","ts":1668490962.7848113,"logger":"driver.controller","msg":"GetCapacity called","volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{}}],"parameters":{"csi.storage.k8s.io/fstype":"xfs"},"accessible_topology":"segments:{key:\"topology.topolvm.cybozu.com/node\" value:\"release-ci-ci-op-k5cwk1pv-7cb14\"}"}
{"level":"info","ts":1668490962.7849047,"logger":"driver.controller","msg":"capability argument is not nil, but TopoLVM ignores it"}
{"level":"info","ts":1668490992.7857058,"logger":"driver.controller","msg":"GetCapacity called","volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{}}],"parameters":{"csi.storage.k8s.io/fstype":"xfs"},"accessible_topology":"segments:{key:\"topology.topolvm.cybozu.com/node\" value:\"release-ci-ci-op-k5cwk1pv-7cb14\"}"}
{"level":"info","ts":1668490992.7857738,"logger":"driver.controller","msg":"capability argument is not nil, but TopoLVM ignores it"}
{"level":"info","ts":1668491004.1004128,"logger":"driver.identity","msg":"Probe","req":""}
{"level":"info","ts":1668491022.7866995,"logger":"driver.controller","msg":"GetCapacity called","volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{}}],"parameters":{"csi.storage.k8s.io/fstype":"xfs"},"accessible_topology":"segments:{key:\"topology.topolvm.cybozu.com/node\" value:\"release-ci-ci-op-k5cwk1pv-7cb14\"}"}
{"level":"info","ts":1668491022.7867897,"logger":"driver.controller","msg":"capability argument is not nil, but TopoLVM ignores it"}
{"level":"info","ts":1668491052.7875264,"logger":"driver.controller","msg":"GetCapacity called","volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{}}],"parameters":{"csi.storage.k8s.io/fstype":"xfs"},"accessible_topology":"segments:{key:\"topology.topolvm.cybozu.com/node\" value:\"release-ci-ci-op-k5cwk1pv-7cb14\"}"}
{"level":"info","ts":1668491052.7879958,"logger":"driver.controller","msg":"capability argument is not nil, but TopoLVM ignores it"}
{"level":"info","ts":1668491064.0990093,"logger":"driver.identity","msg":"Probe","req":""}
{"level":"info","ts":1668491082.7871583,"logger":"driver.controller","msg":"GetCapacity called","volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{}}],"parameters":{"csi.storage.k8s.io/fstype":"xfs"},"accessible_topology":"segments:{key:\"topology.topolvm.cybozu.com/node\" value:\"release-ci-ci-op-k5cwk1pv-7cb14\"}"}
{"level":"info","ts":1668491082.7872715,"logger":"driver.controller","msg":"capability argument is not nil, but TopoLVM ignores it"}
{"level":"info","ts":1668491112.7874482,"logger":"driver.controller","msg":"GetCapacity called","volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{}}],"parameters":{"csi.storage.k8s.io/fstype":"xfs"},"accessible_topology":"segments:{key:\"topology.topolvm.cybozu.com/node\" value:\"release-ci-ci-op-k5cwk1pv-7cb14\"}"}
{"level":"info","ts":1668491112.7875457,"logger":"driver.controller","msg":"capability argument is not nil, but TopoLVM ignores it"}
{"level":"info","ts":1668491124.0987122,"logger":"driver.identity","msg":"Probe","req":""}
{"level":"info","ts":1668491142.7877817,"logger":"driver.controller","msg":"GetCapacity called","volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{}}],"parameters":{"csi.storage.k8s.io/fstype":"xfs"},"accessible_topology":"segments:{key:\"topology.topolvm.cybozu.com/node\" value:\"release-ci-ci-op-k5cwk1pv-7cb14\"}"}
{"level":"info","ts":1668491142.7878587,"logger":"driver.controller","msg":"capability argument is not nil, but TopoLVM ignores it"}
{"level":"info","ts":1668491172.7884493,"logger":"driver.controller","msg":"GetCapacity called","volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{}}],"parameters":{"csi.storage.k8s.io/fstype":"xfs"},"accessible_topology":"segments:{key:\"topology.topolvm.cybozu.com/node\" value:\"release-ci-ci-op-k5cwk1pv-7cb14\"}"}
{"level":"info","ts":1668491172.7887459,"logger":"driver.controller","msg":"capability argument is not nil, but TopoLVM ignores it"}
{"level":"info","ts":1668491184.0980473,"logger":"driver.identity","msg":"Probe","req":""}
{"level":"info","ts":1668491202.788732,"logger":"driver.controller","msg":"GetCapacity called","volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{}}],"parameters":{"csi.storage.k8s.io/fstype":"xfs"},"accessible_topology":"segments:{key:\"topology.topolvm.cybozu.com/node\" value:\"release-ci-ci-op-k5cwk1pv-7cb14\"}"}
{"level":"info","ts":1668491202.7888236,"logger":"driver.controller","msg":"capability argument is not nil, but TopoLVM ignores it"}
{"level":"info","ts":1668491232.7896845,"logger":"driver.controller","msg":"GetCapacity called","volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{}}],"parameters":{"csi.storage.k8s.io/fstype":"xfs"},"accessible_topology":"segments:{key:\"topology.topolvm.cybozu.com/node\" value:\"release-ci-ci-op-k5cwk1pv-7cb14\"}"}
{"level":"info","ts":1668491232.7897673,"logger":"driver.controller","msg":"capability argument is not nil, but TopoLVM ignores it"}
{"level":"info","ts":1668491244.0990417,"logger":"driver.identity","msg":"Probe","req":""}
{"level":"info","ts":1668491262.7906432,"logger":"driver.controller","msg":"GetCapacity called","volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{}}],"parameters":{"csi.storage.k8s.io/fstype":"xfs"},"accessible_topology":"segments:{key:\"topology.topolvm.cybozu.com/node\" value:\"release-ci-ci-op-k5cwk1pv-7cb14\"}"}
{"level":"info","ts":1668491262.790755,"logger":"driver.controller","msg":"capability argument is not nil, but TopoLVM ignores it"}
{"level":"info","ts":1668491292.7917416,"logger":"driver.controller","msg":"GetCapacity called","volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{}}],"parameters":{"csi.storage.k8s.io/fstype":"xfs"},"accessible_topology":"segments:{key:\"topology.topolvm.cybozu.com/node\" value:\"release-ci-ci-op-k5cwk1pv-7cb14\"}"}
{"level":"info","ts":1668491292.7920778,"logger":"driver.controller","msg":"capability argument is not nil, but TopoLVM ignores it"}
{"level":"info","ts":1668491304.0984716,"logger":"driver.identity","msg":"Probe","req":""}
{"level":"info","ts":1668491322.7926288,"logger":"driver.controller","msg":"GetCapacity called","volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{}}],"parameters":{"csi.storage.k8s.io/fstype":"xfs"},"accessible_topology":"segments:{key:\"topology.topolvm.cybozu.com/node\" value:\"release-ci-ci-op-k5cwk1pv-7cb14\"}"}
{"level":"info","ts":1668491322.79274,"logger":"driver.controller","msg":"capability argument is not nil, but TopoLVM ignores it"}
+ kubectl logs --previous=true -n openshift-storage pod/topolvm-controller-8456864f89-vg42d topolvm-controller
Error from server (BadRequest): previous terminated container "topolvm-controller" in pod "topolvm-controller-8456864f89-vg42d" not found
+ true
+ for container in $(kubectl get -n $ns $pod -o jsonpath='{.spec.containers[*].name}')
+ kubectl logs -n openshift-storage pod/topolvm-controller-8456864f89-vg42d csi-provisioner
W1115 05:39:47.661345 1 feature_gate.go:237] Setting GA feature gate Topology=true. It will be removed in a future release.
I1115 05:39:47.661537 1 csi-provisioner.go:150] Version: v4.11.0-202209161337.p0.g86277ec.assembly.stream-0-gd6948ab-dirty
I1115 05:39:47.661547 1 csi-provisioner.go:173] Building kube configs for running in cluster...
I1115 05:39:47.664402 1 common.go:111] Probing CSI driver for readiness
I1115 05:39:47.669462 1 csi-provisioner.go:289] CSI driver does not support PUBLISH_UNPUBLISH_VOLUME, not watching VolumeAttachments
I1115 05:39:49.670871 1 request.go:601] Waited for 1.992095465s due to client-side throttling, not priority and fairness, request: GET:https://10.43.0.1:443/apis/events.k8s.io/v1?timeout=32s
I1115 05:40:00.670437 1 request.go:601] Waited for 12.991575082s due to client-side throttling, not priority and fairness, request: GET:https://10.43.0.1:443/apis/storage.k8s.io/v1?timeout=32s
I1115 05:40:10.670527 1 request.go:601] Waited for 22.991543562s due to client-side throttling, not priority and fairness, request: GET:https://10.43.0.1:443/apis/route.openshift.io/v1?timeout=32s
I1115 05:40:12.678146 1 csi-provisioner.go:442] using apps/v1/Deployment topolvm-controller as owner of CSIStorageCapacity objects
I1115 05:40:12.678355 1 nodes.go:205] Started node topology worker
I1115 05:40:12.681257 1 csi-provisioner.go:507] using the CSIStorageCapacity v1 API
I1115 05:40:12.782229 1 controller.go:811] Starting provisioner controller topolvm.cybozu.com_topolvm-controller-8456864f89-vg42d_e645c805-37cc-4a91-86c7-a491751cccc3!
I1115 05:40:12.782260 1 capacity.go:243] Starting Capacity Controller
I1115 05:40:12.782280 1 clone_controller.go:66] Starting CloningProtection controller
I1115 05:40:12.782298 1 clone_controller.go:82] Started CloningProtection controller
I1115 05:40:12.782320 1 volume_store.go:97] Starting save volume queue
I1115 05:40:12.782356 1 capacity.go:255] Started Capacity Controller
I1115 05:40:12.882829 1 controller.go:860] Started provisioner controller topolvm.cybozu.com_topolvm-controller-8456864f89-vg42d_e645c805-37cc-4a91-86c7-a491751cccc3!
+ kubectl logs --previous=true -n openshift-storage pod/topolvm-controller-8456864f89-vg42d csi-provisioner
Error from server (BadRequest): previous terminated container "csi-provisioner" in pod "topolvm-controller-8456864f89-vg42d" not found
+ true
+ for container in $(kubectl get -n $ns $pod -o jsonpath='{.spec.containers[*].name}')
+ kubectl logs -n openshift-storage pod/topolvm-controller-8456864f89-vg42d csi-resizer
I1115 05:39:50.761015 1 main.go:93] Version : v4.11.0-202209161337.p0.g2cea576.assembly.stream-0-g16073e9-dirty
I1115 05:39:50.762759 1 common.go:111] Probing CSI driver for readiness
I1115 05:39:50.765624 1 controller.go:120] Register Pod informer for resizer topolvm.cybozu.com
I1115 05:39:50.765666 1 controller.go:255] Starting external resizer topolvm.cybozu.com
+ kubectl logs --previous=true -n openshift-storage pod/topolvm-controller-8456864f89-vg42d csi-resizer
Error from server (BadRequest): previous terminated container "csi-resizer" in pod "topolvm-controller-8456864f89-vg42d" not found
+ true
+ for container in $(kubectl get -n $ns $pod -o jsonpath='{.spec.containers[*].name}')
+ kubectl logs -n openshift-storage pod/topolvm-controller-8456864f89-vg42d liveness-probe
I1115 05:39:53.326224 1 main.go:149] calling CSI driver to discover driver name
I1115 05:39:53.327197 1 main.go:155] CSI driver name: "topolvm.cybozu.com"
I1115 05:39:53.327215 1 main.go:183] ServeMux listening at "0.0.0.0:9808"
+ kubectl logs --previous=true -n openshift-storage pod/topolvm-controller-8456864f89-vg42d liveness-probe
Error from server (BadRequest): previous terminated container "liveness-probe" in pod "topolvm-controller-8456864f89-vg42d" not found
+ true
+ for pod in $(kubectl get pods -n $ns -o name)
+ kubectl describe -n openshift-storage pod/topolvm-node-2tnh5
Name:                 topolvm-node-2tnh5
Namespace:            openshift-storage
Priority:             2000001000
Priority Class Name:  system-node-critical
Service Account:      topolvm-node
Node:                 release-ci-ci-op-k5cwk1pv-7cb14/10.0.0.2
Start Time:           Tue, 15 Nov 2022 05:39:33 +0000
Labels:               app=topolvm-node
                      controller-revision-hash=58576f646
                      pod-template-generation=1
Annotations:          k8s.ovn.org/pod-networks: {"default":{"ip_addresses":["10.42.0.5/24"],"mac_address":"0a:58:0a:2a:00:05","gateway_ips":["10.42.0.1"],"ip_address":"10.42.0.5/24","gat...
                      odf-lvm.microshift.io/lvmd_config_sha256sum: cd811881ede06f69cba50cf9408e349ccf3edca76a9aec93d8cd35ba04e6033d
Status:               Pending
IP:                   10.42.0.5
IPs:
  IP:           10.42.0.5
Controlled By:  DaemonSet/topolvm-node
Init Containers:
  file-checker:
    Container ID:  cri-o://a4656d515dbf6e259ef944a9ae945e263ea333263991044147fbfe621bf46b86
    Image:         registry.access.redhat.com/ubi8/openssl@sha256:8b41865d30b7947de68a9c1747616bce4efab4f60f68f8b7016cd84d7708af6b
    Image ID:      registry.access.redhat.com/ubi8/openssl@sha256:8b41865d30b7947de68a9c1747616bce4efab4f60f68f8b7016cd84d7708af6b
    Port:
    Host Port:
    Command:
      /usr/bin/bash
      -c
      until [ -f /etc/topolvm/lvmd.yaml ]; do echo waiting for lvmd config file; sleep 5; done
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Tue, 15 Nov 2022 05:39:37 +0000
      Finished:     Tue, 15 Nov 2022 05:39:37 +0000
    Ready:          True
    Restart Count:  0
    Environment:
    Mounts:
      /etc/topolvm from lvmd-config-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jc2nm (ro)
Containers:
  lvmd:
    Container ID:
    Image:         registry.redhat.io/odf4/odf-topolvm-rhel8@sha256:362c41177d086fc7c8d4fa4ac3bbedb18b1902e950feead9219ea59d1ad0e7ad
    Image ID:
    Port:
    Host Port:
    Command:
      /lvmd
      --config=/etc/topolvm/lvmd.yaml
      --container=true
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Requests:
      cpu:     250m
      memory:  250Mi
    Environment:
    Mounts:
      /etc/topolvm from lvmd-config-dir (rw)
      /run/lvmd from lvmd-socket-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jc2nm (ro)
  topolvm-node:
    Container ID:
    Image:         registry.redhat.io/odf4/odf-topolvm-rhel8@sha256:362c41177d086fc7c8d4fa4ac3bbedb18b1902e950feead9219ea59d1ad0e7ad
    Image ID:
    Port:          9808/TCP
    Host Port:     0/TCP
    Command:
      /topolvm-node
      --lvmd-socket=/run/lvmd/lvmd.socket
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Requests:
      cpu:     250m
      memory:  250Mi
    Liveness:  http-get http://:healthz/healthz delay=10s timeout=3s period=60s #success=1 #failure=3
    Environment:
      NODE_NAME:   (v1:spec.nodeName)
    Mounts:
      /run/lvmd from lvmd-socket-dir (rw)
      /run/topolvm from node-plugin-dir (rw)
      /var/lib/kubelet/plugins/kubernetes.io/csi from csi-plugin-dir (rw)
      /var/lib/kubelet/pods from pod-volumes-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jc2nm (ro)
  csi-registrar:
    Container ID:
    Image:         registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:3babcf219371017d92f8bc3301de6c63681fcfaa8c344ec7891c8e84f31420eb
    Image ID:
    Port:
    Host Port:
    Args:
      --csi-address=/run/topolvm/csi-topolvm.sock
      --kubelet-registration-path=/var/lib/kubelet/plugins/topolvm.cybozu.com/node/csi-topolvm.sock
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Environment:
    Mounts:
      /registration from registration-dir (rw)
      /run/topolvm from node-plugin-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jc2nm (ro)
  liveness-probe:
    Container ID:
    Image:         registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e4b0f6c89a12d26babdc2feae7d13d3f281ac4d38c24614c13c230b4a29ec56e
    Image ID:
    Port:
    Host Port:
    Args:
      --csi-address=/run/topolvm/csi-topolvm.sock
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Environment:
    Mounts:
      /run/topolvm from node-plugin-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jc2nm (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  registration-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/plugins_registry/
    HostPathType:  Directory
  node-plugin-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/plugins/topolvm.cybozu.com/node
    HostPathType:  DirectoryOrCreate
  csi-plugin-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/plugins/kubernetes.io/csi
    HostPathType:  DirectoryOrCreate
  pod-volumes-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/pods/
    HostPathType:  DirectoryOrCreate
  lvmd-config-dir:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      lvmd
    Optional:  false
  lvmd-socket-dir:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     Memory
    SizeLimit:
  kube-api-access-jc2nm:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:
    DownwardAPI:             true
    ConfigMapName:           openshift-service-ca.crt
    ConfigMapOptional:
QoS Class:       Burstable
Node-Selectors:
Tolerations:     node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                 node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                 node.kubernetes.io/not-ready:NoExecute op=Exists
                 node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                 node.kubernetes.io/unreachable:NoExecute op=Exists
                 node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  9m11s  default-scheduler  Successfully assigned openshift-storage/topolvm-node-2tnh5 to release-ci-ci-op-k5cwk1pv-7cb14
  Normal  Pulling    9m10s  kubelet            Pulling image "registry.access.redhat.com/ubi8/openssl@sha256:8b41865d30b7947de68a9c1747616bce4efab4f60f68f8b7016cd84d7708af6b"
  Normal  Pulled   9m7s   kubelet            Successfully pulled image "registry.access.redhat.com/ubi8/openssl@sha256:8b41865d30b7947de68a9c1747616bce4efab4f60f68f8b7016cd84d7708af6b" in 2.862705905s
  Normal  Created  9m7s   kubelet            Created container file-checker
  Normal  Started  9m7s   kubelet            Started container file-checker
  Normal  Pulling  9m7s   kubelet            Pulling image "registry.redhat.io/odf4/odf-topolvm-rhel8@sha256:362c41177d086fc7c8d4fa4ac3bbedb18b1902e950feead9219ea59d1ad0e7ad"
  Normal  Pulled   9m1s   kubelet            Successfully pulled image "registry.redhat.io/odf4/odf-topolvm-rhel8@sha256:362c41177d086fc7c8d4fa4ac3bbedb18b1902e950feead9219ea59d1ad0e7ad" in 5.723956857s
  Normal  Created  9m1s   kubelet            Created container lvmd
  Normal  Started  9m1s   kubelet            Started container lvmd
  Normal  Pulled   9m1s   kubelet            Container image "registry.redhat.io/odf4/odf-topolvm-rhel8@sha256:362c41177d086fc7c8d4fa4ac3bbedb18b1902e950feead9219ea59d1ad0e7ad" already present on machine
  Normal  Created  9m1s   kubelet            Created container topolvm-node
  Normal  Started  9m1s   kubelet            Started container topolvm-node
  Normal  Pulling  9m1s   kubelet            Pulling image "registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:3babcf219371017d92f8bc3301de6c63681fcfaa8c344ec7891c8e84f31420eb"
  Normal  Pulled   8m57s  kubelet            Successfully pulled image "registry.redhat.io/openshift4/ose-csi-node-driver-registrar@sha256:3babcf219371017d92f8bc3301de6c63681fcfaa8c344ec7891c8e84f31420eb" in 3.640322104s
  Normal  Created  8m57s  kubelet            Created container csi-registrar
  Normal  Started  8m57s  kubelet            Started container csi-registrar
  Normal  Pulling  8m57s  kubelet            Pulling image "registry.redhat.io/openshift4/ose-csi-livenessprobe@sha256:e4b0f6c89a12d26babdc2feae7d13d3f281ac4d38c24614c13c230b4a29ec56e"
++ kubectl get -n openshift-storage pod/topolvm-node-2tnh5 -o 'jsonpath={.spec.containers[*].name}'
+ for container in $(kubectl get -n $ns $pod -o jsonpath='{.spec.containers[*].name}')
+ kubectl logs -n openshift-storage pod/topolvm-node-2tnh5 lvmd
Error from server (BadRequest): container "lvmd" in pod "topolvm-node-2tnh5" is waiting to start: PodInitializing
+ true
+ kubectl logs --previous=true -n openshift-storage pod/topolvm-node-2tnh5 lvmd
Error from server (BadRequest): previous terminated container "lvmd" in pod "topolvm-node-2tnh5" not found
+ true
+ for container in $(kubectl get -n $ns $pod -o jsonpath='{.spec.containers[*].name}')
+ kubectl logs -n openshift-storage pod/topolvm-node-2tnh5 topolvm-node
Error from server (BadRequest): container "topolvm-node" in pod "topolvm-node-2tnh5" is waiting to start: PodInitializing
+ true
+ kubectl logs --previous=true -n openshift-storage pod/topolvm-node-2tnh5 topolvm-node
Error from server (BadRequest): previous terminated container "topolvm-node" in pod "topolvm-node-2tnh5" not found
+ true
+ for container in $(kubectl get -n $ns $pod -o jsonpath='{.spec.containers[*].name}')
+ kubectl logs -n openshift-storage pod/topolvm-node-2tnh5 csi-registrar
Error from server (BadRequest): container "csi-registrar" in pod "topolvm-node-2tnh5" is waiting to start: PodInitializing
+ true
+ kubectl logs --previous=true -n openshift-storage pod/topolvm-node-2tnh5 csi-registrar
Error from server (BadRequest): previous terminated container "csi-registrar" in pod "topolvm-node-2tnh5" not found
+ true
+ for container in $(kubectl get -n $ns $pod -o jsonpath='{.spec.containers[*].name}')
+ kubectl logs -n openshift-storage pod/topolvm-node-2tnh5 liveness-probe
Error from server (BadRequest): container "liveness-probe" in pod "topolvm-node-2tnh5" is waiting to start: PodInitializing
+ true
+ kubectl logs --previous=true -n openshift-storage pod/topolvm-node-2tnh5 liveness-probe
Error from server (BadRequest): previous terminated container "liveness-probe" in pod "topolvm-node-2tnh5" not found
+ true
+ exit 1
{"component":"entrypoint","error":"wrapped process failed: exit status 1","file":"k8s.io/test-infra/prow/entrypoint/run.go:79","func":"k8s.io/test-infra/prow/entrypoint.Options.Run","level":"error","msg":"Error executing test process","severity":"error","time":"2022-11-15T05:48:45Z"}
error: failed to execute wrapped command: exit status 1
INFO[2022-11-15T05:48:48Z] Step e2e-openshift-conformance-sig-scheduling-openshift-microshift-e2e-wait-for-cluster-up failed after 10m50s.
INFO[2022-11-15T05:48:48Z] Step phase pre failed after 21m50s.
INFO[2022-11-15T05:48:48Z] Running multi-stage phase post
INFO[2022-11-15T05:48:48Z] Running step e2e-openshift-conformance-sig-scheduling-upi-gcp-rhel8-post.
INFO[2022-11-15T05:50:38Z] Step e2e-openshift-conformance-sig-scheduling-upi-gcp-rhel8-post succeeded after 1m50s.
INFO[2022-11-15T05:50:38Z] Running step e2e-openshift-conformance-sig-scheduling-upi-gcp-rhel8-cleanup-disk.
INFO[2022-11-15T05:51:18Z] Step e2e-openshift-conformance-sig-scheduling-upi-gcp-rhel8-cleanup-disk succeeded after 40s.
INFO[2022-11-15T05:51:18Z] Step phase post succeeded after 2m30s.
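The `+`/`++` xtrace lines in the failing step above come from a per-pod, per-container log-collection loop. A rough reconstruction is sketched below; the `$ns`, `$pod`, and `$container` variables and every `kubectl` invocation appear verbatim in the trace, while the `dump_pod_logs` function name and the `|| true` error handling are assumptions made for a self-contained sketch:

```shell
#!/bin/sh
# Sketch of the log-collection loop implied by the xtrace output above.
# For each pod in the namespace: describe it, then fetch current and
# previous logs for every container.
dump_pod_logs() {
  ns=$1
  for pod in $(kubectl get pods -n "$ns" -o name); do
    kubectl describe -n "$ns" "$pod"
    for container in $(kubectl get -n "$ns" "$pod" -o jsonpath='{.spec.containers[*].name}'); do
      # Fails with BadRequest while a container is still PodInitializing;
      # keep going anyway, as the `+ true` lines in the trace do.
      kubectl logs -n "$ns" "$pod" "$container" || true
      # Logs of a previously terminated instance, if the container restarted.
      kubectl logs --previous=true -n "$ns" "$pod" "$container" || true
    done
  done
}
```

In the run above this collection is a failure path: after dumping whatever logs exist, the step ends with `exit 1`, which is what the Prow entrypoint then reports as `wrapped process failed: exit status 1`.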
INFO[2022-11-15T05:51:18Z] Releasing leases for test e2e-openshift-conformance-sig-scheduling
INFO[2022-11-15T05:51:19Z] Ran for 36m18s
ERRO[2022-11-15T05:51:19Z] Some steps failed:
ERRO[2022-11-15T05:51:19Z] * could not run steps: step e2e-openshift-conformance-sig-scheduling failed: "e2e-openshift-conformance-sig-scheduling" pre steps failed: "e2e-openshift-conformance-sig-scheduling" pod "e2e-openshift-conformance-sig-scheduling-openshift-microshift-e2e-wait-for-cluster-up" failed: the pod ci-op-k5cwk1pv/e2e-openshift-conformance-sig-scheduling-openshift-microshift-e2e-wait-for-cluster-up failed after 10m48s (failed containers: test): ContainerFailed one or more containers exited
Container test exited with code 1, reason Error
---
kubectl logs --previous=true -n openshift-storage pod/topolvm-node-2tnh5 lvmd
Error from server (BadRequest): previous terminated container "lvmd" in pod "topolvm-node-2tnh5" not found
+ true
+ for container in $(kubectl get -n $ns $pod -o jsonpath='{.spec.containers[*].name}')
+ kubectl logs -n openshift-storage pod/topolvm-node-2tnh5 topolvm-node
Error from server (BadRequest): container "topolvm-node" in pod "topolvm-node-2tnh5" is waiting to start: PodInitializing
+ true
+ kubectl logs --previous=true -n openshift-storage pod/topolvm-node-2tnh5 topolvm-node
Error from server (BadRequest): previous terminated container "topolvm-node" in pod "topolvm-node-2tnh5" not found
+ true
+ for container in $(kubectl get -n $ns $pod -o jsonpath='{.spec.containers[*].name}')
+ kubectl logs -n openshift-storage pod/topolvm-node-2tnh5 csi-registrar
Error from server (BadRequest): container "csi-registrar" in pod "topolvm-node-2tnh5" is waiting to start: PodInitializing
+ true
+ kubectl logs --previous=true -n openshift-storage pod/topolvm-node-2tnh5 csi-registrar
Error from server (BadRequest): previous terminated container "csi-registrar" in pod "topolvm-node-2tnh5" not found
+ true
+ for container in $(kubectl get -n $ns $pod -o jsonpath='{.spec.containers[*].name}')
+ kubectl logs -n openshift-storage pod/topolvm-node-2tnh5 liveness-probe
Error from server (BadRequest): container "liveness-probe" in pod "topolvm-node-2tnh5" is waiting to start: PodInitializing
+ true
+ kubectl logs --previous=true -n openshift-storage pod/topolvm-node-2tnh5 liveness-probe
Error from server (BadRequest): previous terminated container "liveness-probe" in pod "topolvm-node-2tnh5" not found
+ true
+ exit 1
{"component":"entrypoint","error":"wrapped process failed: exit status 1","file":"k8s.io/test-infra/prow/entrypoint/run.go:79","func":"k8s.io/test-infra/prow/entrypoint.Options.Run","level":"error","msg":"Error executing test process","severity":"error","time":"2022-11-15T05:48:45Z"}
error: failed to execute wrapped command: exit status 1
---
Link to step on registry info site: https://steps.ci.openshift.org/reference/openshift-microshift-e2e-wait-for-cluster-up
Link to job on registry info site: https://steps.ci.openshift.org/job?org=openshift&repo=microshift&branch=main&test=e2e-openshift-conformance-sig-scheduling
INFO[2022-11-15T05:51:19Z] Reporting job state 'failed' with reason 'executing_graph:step_failed:utilizing_lease:executing_test:executing_multi_stage_test'
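For a failure like the one captured above (pod `topolvm-node-2tnh5` Pending with every container Waiting on `PodInitializing`, so `kubectl logs` returns BadRequest), a jsonpath query over the container statuses shows directly which state each container is stuck in. This is a hedged sketch, not part of the job: the `waiting_reasons` wrapper is an assumption; only the namespace and pod name come from the log.

```shell
#!/bin/sh
# Print "<container>\t<waiting reason>" for each container of a pod, to see
# what is blocking startup. The waiting_reasons name is an assumption.
waiting_reasons() {
  ns=$1 pod=$2
  kubectl get pod -n "$ns" "$pod" \
    -o jsonpath='{range .status.containerStatuses[*]}{.name}{"\t"}{.state.waiting.reason}{"\n"}{end}'
}
# Against the pod from this run:
# waiting_reasons openshift-storage topolvm-node-2tnh5
```

When every row reports `PodInitializing`, the next place to look is `.status.initContainerStatuses` and the pod events, since the main containers cannot start until all init containers complete.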