Bug
Resolution: Unresolved
Critical
4.20.0, 4.20.z
Quality / Stability / Reliability
False
Rejected
Description of problem:
There seems to be different behavior between images 4.20.0-0.nightly-2025-09-10-005515 and 4.20.0-0.nightly-2025-09-10-095237.
This issue came up while debugging https://redhat-internal.slack.com/archives/CF8SMALS1/p1757403584565199
After installing a 4.19 cluster, upgrading it to the 4.20.0-0.nightly-2025-09-10-005515 build, and enabling Tech Preview, the CVO is stuck in a Progressing state.
mjoseph@mjoseph-mac Downloads % oc get co
NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE
authentication 4.20.0-0.nightly-2025-09-10-005515 True False False 4s
baremetal 4.20.0-0.nightly-2025-09-10-005515 True False False 3h13m
cloud-controller-manager 4.20.0-0.nightly-2025-09-10-005515 True False False 3h16m
cloud-credential 4.20.0-0.nightly-2025-09-10-005515 True False False 3h17m
cluster-api 4.20.0-0.nightly-2025-09-10-005515 True False False 25m
cluster-autoscaler 4.20.0-0.nightly-2025-09-10-005515 True False False 3h13m
config-operator 4.20.0-0.nightly-2025-09-10-005515 True False False 3h14m
console 4.20.0-0.nightly-2025-09-10-005515 True False False 3h1m
control-plane-machine-set 4.20.0-0.nightly-2025-09-10-005515 True False False 3h10m
csi-snapshot-controller 4.20.0-0.nightly-2025-09-10-005515 True False False 3h13m
dns 4.20.0-0.nightly-2025-09-10-005515 True False False 3h13m
etcd 4.20.0-0.nightly-2025-09-10-005515 True False True 3h12m NodeControllerDegraded: The master nodes not ready: node "ip-10-0-15-162.us-west-2.compute.internal" not ready since 2025-09-11 13:46:32 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)
image-registry 4.20.0-0.nightly-2025-09-10-005515 True True False 3h4m Progressing: The deployment has not completed...
ingress 4.20.0-0.nightly-2025-09-10-005515 True False False 3h3m
insights 4.20.0-0.nightly-2025-09-10-005515 True False False 3h14m
kube-apiserver 4.20.0-0.nightly-2025-09-10-005515 True True True 3h9m InstallerPodContainerWaitingDegraded: Pod "installer-9-ip-10-0-15-162.us-west-2.compute.internal" on node "ip-10-0-15-162.us-west-2.compute.internal" container "installer" is waiting since 2025-09-11 13:46:32 +0000 UTC because ContainerCreating...
kube-controller-manager 4.20.0-0.nightly-2025-09-10-005515 True True True 3h9m InstallerPodContainerWaitingDegraded: Pod "installer-10-retry-1-ip-10-0-15-162.us-west-2.compute.internal" on node "ip-10-0-15-162.us-west-2.compute.internal" container "installer" is waiting since 2025-09-11 13:46:32 +0000 UTC because ContainerCreating...
kube-scheduler 4.20.0-0.nightly-2025-09-10-005515 True True True 3h11m InstallerPodContainerWaitingDegraded: Pod "installer-9-retry-1-ip-10-0-15-162.us-west-2.compute.internal" on node "ip-10-0-15-162.us-west-2.compute.internal" container "installer" is waiting since 2025-09-11 13:46:48 +0000 UTC because ContainerCreating...
kube-storage-version-migrator 4.20.0-0.nightly-2025-09-10-005515 True False False 25m
machine-api 4.20.0-0.nightly-2025-09-10-005515 True False False 3h8m
machine-approver 4.20.0-0.nightly-2025-09-10-005515 True False False 3h14m
machine-config 4.20.0-0.nightly-2025-09-10-005515 True False False 3h12m
marketplace 4.20.0-0.nightly-2025-09-10-005515 True False False 3h13m
monitoring 4.20.0-0.nightly-2025-09-10-005515 True False True 4m54s UpdatingPrometheus: Prometheus "openshift-monitoring/k8s": SomePodsNotReady: shard 0: pod prometheus-k8s-0: 0/6 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) didn't match PersistentVolume's node affinity, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling.
network 4.20.0-0.nightly-2025-09-10-005515 True True True 3h16m DaemonSet "/openshift-multus/multus" rollout is not making progress - pod multus-gd6cz is in CrashLoopBackOff State...
node-tuning 4.20.0-0.nightly-2025-09-10-005515 True False False 53m
olm 4.20.0-0.nightly-2025-09-10-005515 True False True 41m CatalogdDeploymentCatalogdControllerManagerDegraded: error running hook function (index=1): argument "--feature-gates=" has conflicting values: existing="", new="APIV1MetasHandler=true"...
openshift-apiserver 4.20.0-0.nightly-2025-09-10-005515 False True False 7m24s APIServicesAvailable: "apps.openshift.io.v1" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request...
openshift-controller-manager 4.20.0-0.nightly-2025-09-10-005515 True False False 3h4m
openshift-samples 4.20.0-0.nightly-2025-09-10-005515 True False False 3h4m
operator-lifecycle-manager 4.20.0-0.nightly-2025-09-10-005515 True False False 3h13m
operator-lifecycle-manager-catalog 4.20.0-0.nightly-2025-09-10-005515 True False False 3h13m
operator-lifecycle-manager-packageserver 4.20.0-0.nightly-2025-09-10-005515 True False False 3h4m
service-ca 4.20.0-0.nightly-2025-09-10-005515 True False False 3h14m
storage 4.20.0-0.nightly-2025-09-10-005515 True False False 3h12m
And in the DNS operator logs we can see the errors, which are caused by DNSNameResolver:
mjoseph@mjoseph-mac Downloads % oc logs -n openshift-dns-operator dns-operator-55955b6b99-g8vnr Defaulted container "dns-operator" out of: dns-operator, kube-rbac-proxy I0911 13:49:00.552466 1 simple_featuregate_reader.go:171] Starting feature-gate-detector I0911 13:49:00.563288 1 event.go:377] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-dns-operator", Name:"dns-operator", UID:"", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AWSClusterHostedDNSInstall", "AWSDedicatedHosts", "AWSServiceLBNetworkSecurityGroup", "AdditionalRoutingCapabilities", "AdminNetworkPolicy", "AlibabaPlatform", "AutomatedEtcdBackup", "AzureClusterHostedDNSInstall", "AzureDedicatedHosts", "AzureMultiDisk", "AzureWorkloadIdentity", "BootcNodeManagement", "BuildCSIVolumes", "CPMSMachineNamePrefix", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ClusterVersionOperatorConfiguration", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DualReplica", "DyanmicServiceEndpointIBMCloud", "DynamicResourceAllocation", "EtcdBackendQuota", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GCPClusterHostedDNSInstall", "GCPCustomAPIEndpoints", "GCPCustomAPIEndpointsInstall", "GatewayAPI", "GatewayAPIController", "HighlyAvailableArbiter", "ImageModeStatusReporting", "ImageStreamImportMode", "ImageVolume", "IngressControllerDynamicConfigurationManager", "IngressControllerLBSubnetsAWS", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "IrreconcilableMachineConfig", "KMSEncryptionProvider", "KMSv1", "MachineAPIMigration", "MachineConfigNodes", "ManagedBootImages", "ManagedBootImagesAWS", "ManagedBootImagesAzure", "ManagedBootImagesvSphere", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", 
"MultiDiskSetup", "MutatingAdmissionPolicy", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NewOLMCatalogdAPIV1Metas", "NewOLMOwnSingleNamespace", "NewOLMPreflightPermissionChecks", "NewOLMWebhookProviderOpenshiftServiceCA", "NoRegistryClusterOperations", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PreconfiguredUDNAddresses", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "SELinuxMount", "ServiceAccountTokenNodeBinding", "SetEIPForNLBIngressController", "SignatureStores", "SigstoreImageVerification", "SigstoreImageVerificationPKI", "StoragePerformantSecurityPolicy", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereConfigurableMaxAllowedBlockVolumesPerNode", "VSphereHostVMGroupZonal", "VSphereMultiDisk", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}, Disabled:[]v1.FeatureGateName{"BootImageSkewEnforcement", "ClusterAPIInstall", "EventedPLEG", "Example2", "ExternalSnapshotMetadata", "MachineAPIOperatorDisableMachineHealthCheckController", "MultiArchInstallAzure", "ShortCertRotation", "VSphereMixedNodeEnv"}} time="2025-09-11T13:49:00Z" level=info msg="FeatureGates initializedknownFeatures[AWSClusterHostedDNS AWSClusterHostedDNSInstall AWSDedicatedHosts AWSServiceLBNetworkSecurityGroup AdditionalRoutingCapabilities AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureClusterHostedDNSInstall AzureDedicatedHosts AzureMultiDisk AzureWorkloadIdentity BootImageSkewEnforcement BootcNodeManagement BuildCSIVolumes CPMSMachineNamePrefix ClusterAPIInstall ClusterAPIInstallIBMCloud ClusterMonitoringConfig ClusterVersionOperatorConfiguration ConsolePluginContentSecurityPolicy DNSNameResolver DualReplica DyanmicServiceEndpointIBMCloud DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example Example2 ExternalOIDC 
ExternalOIDCWithUIDAndExtraClaimMappings ExternalSnapshotMetadata GCPClusterHostedDNS GCPClusterHostedDNSInstall GCPCustomAPIEndpoints GCPCustomAPIEndpointsInstall GatewayAPI GatewayAPIController HighlyAvailableArbiter ImageModeStatusReporting ImageStreamImportMode ImageVolume IngressControllerDynamicConfigurationManager IngressControllerLBSubnetsAWS InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather IrreconcilableMachineConfig KMSEncryptionProvider KMSv1 MachineAPIMigration MachineAPIOperatorDisableMachineHealthCheckController MachineConfigNodes ManagedBootImages ManagedBootImagesAWS ManagedBootImagesAzure ManagedBootImagesvSphere MaxUnavailableStatefulSet MetricsCollectionProfiles MinimumKubeletVersion MixedCPUsAllocation MultiArchInstallAzure MultiDiskSetup MutatingAdmissionPolicy NetworkDiagnosticsConfig NetworkLiveMigration NetworkSegmentation NewOLM NewOLMCatalogdAPIV1Metas NewOLMOwnSingleNamespace NewOLMPreflightPermissionChecks NewOLMWebhookProviderOpenshiftServiceCA NoRegistryClusterOperations NodeSwap NutanixMultiSubnets OVNObservability OpenShiftPodSecurityAdmission PinnedImages PreconfiguredUDNAddresses ProcMountType RouteAdvertisements RouteExternalCertificate SELinuxMount ServiceAccountTokenNodeBinding SetEIPForNLBIngressController ShortCertRotation SignatureStores SigstoreImageVerification SigstoreImageVerificationPKI StoragePerformantSecurityPolicy TranslateStreamCloseWebsocketRequests UpgradeStatus UserNamespacesPodSecurityStandards UserNamespacesSupport VSphereConfigurableMaxAllowedBlockVolumesPerNode VSphereHostVMGroupZonal VSphereMixedNodeEnv VSphereMultiDisk VSphereMultiNetworks VolumeAttributesClass VolumeGroupSnapshot]" 2025-09-11T13:49:00Z INFO controller-runtime.metrics Starting metrics server 2025-09-11T13:49:00Z INFO controller-runtime.metrics Serving metrics server {"bindAddress": "127.0.0.1:60000", "secure": false} 2025-09-11T13:49:00Z INFO Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.DNS"} 
2025-09-11T13:49:00Z INFO Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.DaemonSet"} 2025-09-11T13:49:00Z INFO Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.Service"} 2025-09-11T13:49:00Z INFO Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.ConfigMap"} 2025-09-11T13:49:00Z INFO Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.ConfigMap"} 2025-09-11T13:49:00Z INFO Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.Node"} 2025-09-11T13:49:00Z INFO Starting Controller {"controller": "dns_controller"} 2025-09-11T13:49:00Z INFO Starting EventSource {"controller": "status_controller", "source": "kind source: *v1.DNS"} 2025-09-11T13:49:00Z INFO Starting EventSource {"controller": "dnsnameresolver_controller", "source": "kind source: *v1alpha1.DNSNameResolver"} 2025-09-11T13:49:00Z INFO Starting EventSource {"controller": "dnsnameresolver_controller", "source": "kind source: *v1.EndpointSlice"} 2025-09-11T13:49:00Z INFO Starting Controller {"controller": "dnsnameresolver_controller"} 2025-09-11T13:49:00Z INFO Starting EventSource {"controller": "status_controller", "source": "kind source: *v1.DaemonSet"} 2025-09-11T13:49:00Z INFO Starting EventSource {"controller": "status_controller", "source": "kind source: *v1.ClusterOperator"} 2025-09-11T13:49:00Z INFO Starting Controller {"controller": "status_controller"} 2025-09-11T13:49:00Z ERROR controller-runtime.source.EventHandler if kind is a CRD, it should be installed before calling Start {"kind": "DNSNameResolver.network.openshift.io", "error": "failed to get restmapping: no matches for kind \"DNSNameResolver\" in group \"network.openshift.io\""} sigs.k8s.io/controller-runtime/pkg/internal/source.(*Kind[...]).Start.func1.1 /dns-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/source/kind.go:71 
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext.func1 /dns-operator/vendor/k8s.io/apimachinery/pkg/util/wait/loop.go:53 k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext /dns-operator/vendor/k8s.io/apimachinery/pkg/util/wait/loop.go:54 k8s.io/apimachinery/pkg/util/wait.PollUntilContextCancel /dns-operator/vendor/k8s.io/apimachinery/pkg/util/wait/poll.go:33 sigs.k8s.io/controller-runtime/pkg/internal/source.(*Kind[...]).Start.func1 /dns-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/source/kind.go:64 2025-09-11T13:49:00Z INFO Starting workers {"controller": "status_controller", "worker count": 1} 2025-09-11T13:49:00Z INFO Starting workers {"controller": "dns_controller", "worker count": 1} time="2025-09-11T13:49:00Z" level=info msg="reconciling request: /default" 2025-09-11T13:49:10Z ERROR controller-runtime.source.EventHandler if kind is a CRD, it should be installed before calling Start {"kind": "DNSNameResolver.network.openshift.io", "error": "failed to get restmapping: no matches for kind \"DNSNameResolver\" in group \"network.openshift.io\""} sigs.k8s.io/controller-runtime/pkg/internal/source.(*Kind[...]).Start.func1.1 /dns-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/source/kind.go:71 k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext.func2 /dns-operator/vendor/k8s.io/apimachinery/pkg/util/wait/loop.go:87 k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext /dns-operator/vendor/k8s.io/apimachinery/pkg/util/wait/loop.go:88 k8s.io/apimachinery/pkg/util/wait.PollUntilContextCancel /dns-operator/vendor/k8s.io/apimachinery/pkg/util/wait/poll.go:33 sigs.k8s.io/controller-runtime/pkg/internal/source.(*Kind[...]).Start.func1 /dns-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/source/kind.go:64 2025-09-11T13:49:20Z ERROR controller-runtime.source.EventHandler if kind is a CRD, it should be installed before calling Start {"kind": "DNSNameResolver.network.openshift.io", "error": "failed to get 
restmapping: no matches for kind \"DNSNameResolver\" in group \"network.openshift.io\""} sigs.k8s.io/controller-runtime/pkg/internal/source.(*Kind[...]).Start.func1.1 /dns-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/source/kind.go:71 k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext.func2 /dns-operator/vendor/k8s.io/apimachinery/pkg/util/wait/loop.go:87 k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext /dns-operator/vendor/k8s.io/apimachinery/pkg/util/wait/loop.go:88 k8s.io/apimachinery/pkg/util/wait.PollUntilContextCancel /dns-operator/vendor/k8s.io/apimachinery/pkg/util/wait/poll.go:33 sigs.k8s.io/controller-runtime/pkg/internal/source.(*Kind[...]).Start.func1 /dns-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/source/kind.go:64 2025-09-11T13:49:30Z ERROR controller-runtime.source.EventHandler if kind is a CRD, it should be installed before calling Start {"kind": "DNSNameResolver.network.openshift.io", "error": "failed to get restmapping: no matches for kind \"DNSNameResolver\" in group \"network.openshift.io\""} sigs.k8s.io/controller-runtime/pkg/internal/source.(*Kind[...]).Start.func1.1 /dns-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/source/kind.go:71 k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext.func2 /dns-operator/vendor/k8s.io/apimachinery/pkg/util/wait/loop.go:87 k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext /dns-operator/vendor/k8s.io/apimachinery/pkg/util/wait/loop.go:88 k8s.io/apimachinery/pkg/util/wait.PollUntilContextCancel /dns-operator/vendor/k8s.io/apimachinery/pkg/util/wait/poll.go:33 sigs.k8s.io/controller-runtime/pkg/internal/source.(*Kind[...]).Start.func1 /dns-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/source/kind.go:64
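The repeating ERROR lines above come from controller-runtime's Kind event source, which keeps polling for the CRD's REST mapping and only succeeds once the CRD is installed. A minimal Python sketch of that retry pattern (a hypothetical illustration of the behavior, not the operator's actual Go code):

```python
import time

def wait_for_restmapping(lookup, interval=10.0, attempts=6):
    """Poll `lookup()` until it succeeds, mimicking how controller-runtime's
    event source retries "failed to get restmapping" until the CRD exists."""
    last_err = None
    for _ in range(attempts):
        try:
            return lookup()  # e.g. resolve DNSNameResolver.network.openshift.io
        except LookupError as err:
            last_err = err  # CRD not installed yet; retry after `interval`
            time.sleep(interval)
    raise TimeoutError(f"kind still not registered: {last_err}")

# Simulate the CRD appearing after two failed polls:
state = {"calls": 0}
def fake_lookup():
    state["calls"] += 1
    if state["calls"] < 3:
        raise LookupError('no matches for kind "DNSNameResolver"')
    return "restmapping-ok"

print(wait_for_restmapping(fake_lookup, interval=0))  # → restmapping-ok
```

This matches what is observed in the second test below: once the CRD shows up (or the pod restarts after it exists), the watch starts cleanly and the errors stop.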
The same issue is also hitting rh-ee-bazhou.
Version-Release number of selected component (if applicable):
4.20.0-0.nightly-2025-09-10-005515 (and, for the second test, 4.20.0-0.nightly-2025-09-10-095237)
How reproducible:
Steps to Reproduce:
1. Create a 4.19 cluster
2. Upgrade it to 4.20
3. Enable Tech preview
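Step 3 is typically done by setting the TechPreviewNoUpgrade feature set on the cluster FeatureGate resource; a minimal manifest sketch (note that this feature set cannot be disabled again on a real cluster):

```yaml
# FeatureGate manifest enabling Tech Preview features (irreversible once applied)
apiVersion: config.openshift.io/v1
kind: FeatureGate
metadata:
  name: cluster
spec:
  featureSet: TechPreviewNoUpgrade
```

Applied with `oc apply -f` (or the equivalent `oc patch featuregate/cluster`).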
Actual results: Operators are degraded/down.
Expected results: All operators should be working fine.
Additional info:
But when I tested again with the latest 4.20.0-0.nightly-2025-09-10-095237 image, the behavior is different. After upgrading the cluster from 4.19 to the latest build and enabling Tech Preview, I was also hitting the traceback in the dns-operator pod, but after the pod restarted the traceback was gone and all the operators came up without degrading.
I didn't see any bug fixes that went into the latest 4.20.0-0.nightly-2025-09-10-095237 in comparison with 4.20.0-0.nightly-2025-09-10-005515.
storage 4.20.0-0.nightly-2025-09-10-095237 True False False 3h38m mjoseph@mjoseph-mac Downloads % mjoseph@mjoseph-mac Downloads % mjoseph@mjoseph-mac Downloads % mjoseph@mjoseph-mac Downloads % oc logs -n openshift-dns-operator dns-operator-6dbd86d56f-tw5f7 Defaulted container "dns-operator" out of: dns-operator, kube-rbac-proxy I0911 14:22:51.463672 1 simple_featuregate_reader.go:171] Starting feature-gate-detector time="2025-09-11T14:22:51Z" level=info msg="FeatureGates initializedknownFeatures[AWSClusterHostedDNS AWSClusterHostedDNSInstall AWSDedicatedHosts AWSServiceLBNetworkSecurityGroup AdditionalRoutingCapabilities AdminNetworkPolicy AlibabaPlatform AutomatedEtcdBackup AzureClusterHostedDNSInstall AzureDedicatedHosts AzureMultiDisk AzureWorkloadIdentity BootImageSkewEnforcement BootcNodeManagement BuildCSIVolumes CPMSMachineNamePrefix ClusterAPIInstall ClusterAPIInstallIBMCloud ClusterMonitoringConfig ClusterVersionOperatorConfiguration ConsolePluginContentSecurityPolicy DNSNameResolver DualReplica DyanmicServiceEndpointIBMCloud DynamicResourceAllocation EtcdBackendQuota EventedPLEG Example Example2 ExternalOIDC ExternalOIDCWithUIDAndExtraClaimMappings ExternalSnapshotMetadata GCPClusterHostedDNS GCPClusterHostedDNSInstall GCPCustomAPIEndpoints GCPCustomAPIEndpointsInstall GatewayAPI GatewayAPIController HighlyAvailableArbiter ImageModeStatusReporting ImageStreamImportMode ImageVolume IngressControllerDynamicConfigurationManager IngressControllerLBSubnetsAWS InsightsConfig InsightsConfigAPI InsightsOnDemandDataGather IrreconcilableMachineConfig KMSEncryptionProvider KMSv1 MachineAPIMigration MachineAPIOperatorDisableMachineHealthCheckController MachineConfigNodes ManagedBootImages ManagedBootImagesAWS ManagedBootImagesAzure ManagedBootImagesvSphere MaxUnavailableStatefulSet MetricsCollectionProfiles MinimumKubeletVersion MixedCPUsAllocation MultiArchInstallAzure MultiDiskSetup MutatingAdmissionPolicy NetworkDiagnosticsConfig NetworkLiveMigration 
NetworkSegmentation NewOLM NewOLMCatalogdAPIV1Metas NewOLMOwnSingleNamespace NewOLMPreflightPermissionChecks NewOLMWebhookProviderOpenshiftServiceCA NoRegistryClusterOperations NodeSwap NutanixMultiSubnets OVNObservability OpenShiftPodSecurityAdmission PinnedImages PreconfiguredUDNAddresses ProcMountType RouteAdvertisements RouteExternalCertificate SELinuxMount ServiceAccountTokenNodeBinding SetEIPForNLBIngressController ShortCertRotation SignatureStores SigstoreImageVerification SigstoreImageVerificationPKI StoragePerformantSecurityPolicy TranslateStreamCloseWebsocketRequests UpgradeStatus UserNamespacesPodSecurityStandards UserNamespacesSupport VSphereConfigurableMaxAllowedBlockVolumesPerNode VSphereHostVMGroupZonal VSphereMixedNodeEnv VSphereMultiDisk VSphereMultiNetworks VolumeAttributesClass VolumeGroupSnapshot]" I0911 14:22:51.517483 1 event.go:377] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-dns-operator", Name:"dns-operator", UID:"", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AWSClusterHostedDNSInstall", "AWSDedicatedHosts", "AWSServiceLBNetworkSecurityGroup", "AdditionalRoutingCapabilities", "AdminNetworkPolicy", "AlibabaPlatform", "AutomatedEtcdBackup", "AzureClusterHostedDNSInstall", "AzureDedicatedHosts", "AzureMultiDisk", "AzureWorkloadIdentity", "BootcNodeManagement", "BuildCSIVolumes", "CPMSMachineNamePrefix", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ClusterVersionOperatorConfiguration", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DualReplica", "DyanmicServiceEndpointIBMCloud", "DynamicResourceAllocation", "EtcdBackendQuota", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GCPClusterHostedDNSInstall", "GCPCustomAPIEndpoints", "GCPCustomAPIEndpointsInstall", "GatewayAPI", "GatewayAPIController", 
"HighlyAvailableArbiter", "ImageModeStatusReporting", "ImageStreamImportMode", "ImageVolume", "IngressControllerDynamicConfigurationManager", "IngressControllerLBSubnetsAWS", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "IrreconcilableMachineConfig", "KMSEncryptionProvider", "KMSv1", "MachineAPIMigration", "MachineConfigNodes", "ManagedBootImages", "ManagedBootImagesAWS", "ManagedBootImagesAzure", "ManagedBootImagesvSphere", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiDiskSetup", "MutatingAdmissionPolicy", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NewOLMCatalogdAPIV1Metas", "NewOLMOwnSingleNamespace", "NewOLMPreflightPermissionChecks", "NewOLMWebhookProviderOpenshiftServiceCA", "NoRegistryClusterOperations", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PreconfiguredUDNAddresses", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "SELinuxMount", "ServiceAccountTokenNodeBinding", "SetEIPForNLBIngressController", "SignatureStores", "SigstoreImageVerification", "SigstoreImageVerificationPKI", "StoragePerformantSecurityPolicy", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereConfigurableMaxAllowedBlockVolumesPerNode", "VSphereHostVMGroupZonal", "VSphereMultiDisk", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}, Disabled:[]v1.FeatureGateName{"BootImageSkewEnforcement", "ClusterAPIInstall", "EventedPLEG", "Example2", "ExternalSnapshotMetadata", "MachineAPIOperatorDisableMachineHealthCheckController", "MultiArchInstallAzure", "ShortCertRotation", "VSphereMixedNodeEnv"}} 2025-09-11T14:22:51Z INFO controller-runtime.metrics Starting metrics server 2025-09-11T14:22:51Z INFO Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.DNS"} 
2025-09-11T14:22:51Z INFO Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.DaemonSet"} 2025-09-11T14:22:51Z INFO Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.Service"} 2025-09-11T14:22:51Z INFO Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.ConfigMap"} 2025-09-11T14:22:51Z INFO Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.ConfigMap"} 2025-09-11T14:22:51Z INFO Starting EventSource {"controller": "dns_controller", "source": "kind source: *v1.Node"} 2025-09-11T14:22:51Z INFO Starting Controller {"controller": "dns_controller"} 2025-09-11T14:22:51Z INFO Starting EventSource {"controller": "status_controller", "source": "kind source: *v1.DNS"} 2025-09-11T14:22:51Z INFO Starting EventSource {"controller": "status_controller", "source": "kind source: *v1.DaemonSet"} 2025-09-11T14:22:51Z INFO Starting EventSource {"controller": "status_controller", "source": "kind source: *v1.ClusterOperator"} 2025-09-11T14:22:51Z INFO Starting Controller {"controller": "status_controller"} 2025-09-11T14:22:51Z INFO Starting EventSource {"controller": "dnsnameresolver_controller", "source": "kind source: *v1alpha1.DNSNameResolver"} 2025-09-11T14:22:51Z INFO Starting EventSource {"controller": "dnsnameresolver_controller", "source": "kind source: *v1.EndpointSlice"} 2025-09-11T14:22:51Z INFO Starting Controller {"controller": "dnsnameresolver_controller"} 2025-09-11T14:22:51Z INFO controller-runtime.metrics Serving metrics server {"bindAddress": "127.0.0.1:60000", "secure": false} 2025-09-11T14:22:51Z ERROR controller-runtime.source.EventHandler if kind is a CRD, it should be installed before calling Start {"kind": "DNSNameResolver.network.openshift.io", "error": "failed to get restmapping: no matches for kind \"DNSNameResolver\" in group \"network.openshift.io\""} sigs.k8s.io/controller-runtime/pkg/internal/source.(*Kind[...]).Start.func1.1 
/dns-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/source/kind.go:71 k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext.func1 /dns-operator/vendor/k8s.io/apimachinery/pkg/util/wait/loop.go:53 k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext /dns-operator/vendor/k8s.io/apimachinery/pkg/util/wait/loop.go:54 k8s.io/apimachinery/pkg/util/wait.PollUntilContextCancel /dns-operator/vendor/k8s.io/apimachinery/pkg/util/wait/poll.go:33 sigs.k8s.io/controller-runtime/pkg/internal/source.(*Kind[...]).Start.func1 /dns-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/source/kind.go:64 2025-09-11T14:22:52Z INFO Starting workers {"controller": "status_controller", "worker count": 1} 2025-09-11T14:22:52Z INFO Starting workers {"controller": "dns_controller", "worker count": 1} time="2025-09-11T14:22:52Z" level=info msg="reconciling request: /default" time="2025-09-11T14:22:52Z" level=info msg="updated dns cluster role /openshift-dns: &v1.ClusterRole{\n \tTypeMeta: {},\n \tObjectMeta: {Name: \"openshift-dns\", UID: \"615c5062-cf90-40f4-8962-c917ae9f647f\", ResourceVersion: \"9399\", CreationTimestamp: {Time: s\"2025-09-11 10:44:19 +0000 UTC\"}, ...},\n \tRules: []v1.PolicyRule{\n \t\t... 
// 2 identical elements\n \t\t{Verbs: {\"create\"}, APIGroups: {\"authentication.k8s.io\"}, Resources: {\"tokenreviews\"}},\n \t\t{Verbs: {\"create\"}, APIGroups: {\"authorization.k8s.io\"}, Resources: {\"subjectaccessreviews\"}},\n+ \t\t{\n+ \t\t\tVerbs: []string{\"get\", \"list\", \"watch\"},\n+ \t\t\tAPIGroups: []string{\"network.openshift.io\"},\n+ \t\t\tResources: []string{\"dnsnameresolvers\"},\n+ \t\t},\n+ \t\t{\n+ \t\t\tVerbs: []string{\"get\", \"update\", \"patch\"},\n+ \t\t\tAPIGroups: []string{\"network.openshift.io\"},\n+ \t\t\tResources: []string{\"dnsnameresolvers/status\"},\n+ \t\t},\n \t},\n \tAggregationRule: nil,\n }\n" time="2025-09-11T14:22:52Z" level=info msg="updated configmap openshift-dns/dns-default: &v1.ConfigMap{\n \tTypeMeta: {},\n \tObjectMeta: {Name: \"dns-default\", Namespace: \"openshift-dns\", UID: \"86fce60a-9219-4e87-af3c-30d6332d54c4\", ResourceVersion: \"9458\", ...},\n \tImmutable: nil,\n- \tData: map[string]string{\n- \t\t\"Corefile\": (\n- \t\t\t\"\"\"\n- \t\t\t.:5353 {\n- \t\t\t bufsize 1232\n- \t\t\t errors\n- \t\t\t log . {\n- \t\t\t class error\n- \t\t\t }\n- \t\t\t health {\n- \t\t\t lameduck 20s\n- \t\t\t }\n- \t\t\t ready\n- \t\t\t kubernetes cluster.local in-addr.arpa ip6.arpa {\n- \t\t\t pods insecure\n- \t\t\t fallthrough in-addr.arpa ip6.arpa\n- \t\t\t }\n- \t\t\t prometheus 127.0.0.1:9153\n- \t\t\t forward . /etc/resolv.conf {\n- \t\t\t policy sequential\n- \t\t\t }\n- \t\t\t cache 900 {\n- \t\t\t denial 9984 30\n- \t\t\t }\n- \t\t\t reload\n- \t\t\t}\n- \t\t\thostname.bind:5353 {\n- \t\t\t chaos\n- \t\t\t}\n- \t\t\t\"\"\"\n- \t\t),\n- \t},\n+ \tData: map[string]string{\n+ \t\t\"Corefile\": (\n+ \t\t\t\"\"\"\n+ \t\t\t.:5353 {\n+ \t\t\t bufsize 1232\n+ \t\t\t errors\n+ \t\t\t log . 
{\n+ \t\t\t class error\n+ \t\t\t }\n+ \t\t\t health {\n+ \t\t\t lameduck 20s\n+ \t\t\t }\n+ \t\t\t ready\n+ \t\t\t kubernetes cluster.local in-addr.arpa ip6.arpa {\n+ \t\t\t pods insecure\n+ \t\t\t fallthrough in-addr.arpa ip6.arpa\n+ \t\t\t }\n+ \t\t\t prometheus 127.0.0.1:9153\n+ \t\t\t forward . /etc/resolv.conf {\n+ \t\t\t policy sequential\n+ \t\t\t }\n+ \t\t\t cache 900 {\n+ \t\t\t denial 9984 30\n+ \t\t\t }\n+ \t\t\t reload\n+ \t\t\t ocp_dnsnameresolver {\n+ \t\t\t namespaces openshift-ovn-kubernetes \n+ \t\t\t }\n+ \t\t\t}\n+ \t\t\thostname.bind:5353 {\n+ \t\t\t chaos\n+ \t\t\t}\n+ \t\t\t\"\"\"\n+ \t\t),\n+ \t},\n \tBinaryData: nil,\n }\n" time="2025-09-11T14:22:53Z" level=info msg="reconciling request: /default" 2025-09-11T14:23:01Z ERROR controller-runtime.source.EventHandler if kind is a CRD, it should be installed before calling Start {"kind": "DNSNameResolver.network.openshift.io", "error": "failed to get restmapping: no matches for kind \"DNSNameResolver\" in group \"network.openshift.io\""} sigs.k8s.io/controller-runtime/pkg/internal/source.(*Kind[...]).Start.func1.1 /dns-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/source/kind.go:71 k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext.func2 /dns-operator/vendor/k8s.io/apimachinery/pkg/util/wait/loop.go:87 k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext /dns-operator/vendor/k8s.io/apimachinery/pkg/util/wait/loop.go:88 k8s.io/apimachinery/pkg/util/wait.PollUntilContextCancel /dns-operator/vendor/k8s.io/apimachinery/pkg/util/wait/poll.go:33 sigs.k8s.io/controller-runtime/pkg/internal/source.(*Kind[...]).Start.func1 /dns-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/source/kind.go:64 mjoseph@mjoseph-mac Downloads % mjoseph@mjoseph-mac Downloads %
mjoseph@mjoseph-mac Downloads % oc get co
NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE
authentication 4.20.0-0.nightly-2025-09-10-095237 True False False 3h23m
baremetal 4.20.0-0.nightly-2025-09-10-095237 True False False 3h45m
cloud-controller-manager 4.20.0-0.nightly-2025-09-10-095237 True False False 3h48m
cloud-credential 4.20.0-0.nightly-2025-09-10-095237 True False False 3h48m
cluster-api 4.20.0-0.nightly-2025-09-10-095237 True False False 4m35s
cluster-autoscaler 4.20.0-0.nightly-2025-09-10-095237 True False False 3h45m
config-operator 4.20.0-0.nightly-2025-09-10-095237 True False False 3h46m
console 4.20.0-0.nightly-2025-09-10-095237 True False False 3h31m
control-plane-machine-set 4.20.0-0.nightly-2025-09-10-095237 True False False 3h44m
csi-snapshot-controller 4.20.0-0.nightly-2025-09-10-095237 True False False 3h46m
dns 4.20.0-0.nightly-2025-09-10-095237 True True False 3h45m DNS "default" reports Progressing=True: "Have 4 available DNS pods, want 5.\nHave 5 available node-resolver pods, want 6."
etcd 4.20.0-0.nightly-2025-09-10-095237 True False False 3h44m
image-registry 4.20.0-0.nightly-2025-09-10-095237 True False False 3h35m
ingress 4.20.0-0.nightly-2025-09-10-095237 True False False 3h37m
insights 4.20.0-0.nightly-2025-09-10-095237 True False False 3h45m
kube-apiserver 4.20.0-0.nightly-2025-09-10-095237 True True False 3h41m NodeInstallerProgressing: 3 nodes are at revision 8; 0 nodes have achieved new revision 9
kube-controller-manager 4.20.0-0.nightly-2025-09-10-095237 True False False 3h41m
kube-scheduler 4.20.0-0.nightly-2025-09-10-095237 True True False 3h43m NodeInstallerProgressing: 3 nodes are at revision 6; 0 nodes have achieved new revision 8
kube-storage-version-migrator 4.20.0-0.nightly-2025-09-10-095237 True False False 28m
machine-api 4.20.0-0.nightly-2025-09-10-095237 True False False 3h42m
machine-approver 4.20.0-0.nightly-2025-09-10-095237 True False False 3h46m
machine-config 4.20.0-0.nightly-2025-09-10-095237 True False False 3h44m
marketplace 4.20.0-0.nightly-2025-09-10-095237 True False False 3h45m
monitoring 4.20.0-0.nightly-2025-09-10-095237 True False False 3h34m
network 4.20.0-0.nightly-2025-09-10-095237 True True False 3h48m DaemonSet "/openshift-multus/network-metrics-daemon" is not available (awaiting 1 nodes)...
node-tuning 4.20.0-0.nightly-2025-09-10-095237 True False False 54m
olm 4.20.0-0.nightly-2025-09-10-095237 True False False 28m
openshift-apiserver 4.20.0-0.nightly-2025-09-10-095237 True True False 3h37m APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation and 2/3 pods are available
openshift-controller-manager 4.20.0-0.nightly-2025-09-10-095237 True False False 3h37m
openshift-samples 4.20.0-0.nightly-2025-09-10-095237 True False False 3h35m
operator-lifecycle-manager 4.20.0-0.nightly-2025-09-10-095237 True False False 3h45m
operator-lifecycle-manager-catalog 4.20.0-0.nightly-2025-09-10-095237 True False False 3h45m
operator-lifecycle-manager-packageserver 4.20.0-0.nightly-2025-09-10-095237 True False False 3h35m
service-ca 4.20.0-0.nightly-2025-09-10-095237 True False False 3h46m
storage 4.20.0-0.nightly-2025-09-10-095237 True True False 3h45m AWSEBSCSIDriverOperatorCRProgressing: AWSEBSDriverNodeServiceControllerProgressing: Waiting for DaemonSet to deploy node pods
mjoseph@mjoseph-mac Downloads % oc logs -n openshift-dns-operator dns-operator-6dbd86d56f-tw5f7
error: error from server (NotFound): pods "dns-operator-6dbd86d56f-tw5f7" not found in namespace "openshift-dns-operator"
mjoseph@mjoseph-mac Downloads % oc get po -n openshift-dns-operator
NAME READY STATUS RESTARTS AGE
dns-operator-6dbd86d56f-c7fl5 2/2 Running 0 7m42s
After this, no operator remains in a degraded state and everything is working fine.