OpenShift Bugs / OCPBUGS-4810

MCO does not sync kubeAPIServerServingCAData to controllerconfig if there are not-ready nodes


    Description

      This is a clone of issue OCPBUGS-3882. The following is the description of the original issue:

      This bug is a backport clone of [Bugzilla Bug 2034883](https://bugzilla.redhat.com/show_bug.cgi?id=2034883). The following is the description of the original bug:

      Description of problem:

      Situation (starting point):

      • There is an ongoing change to the machine-config-daemon daemonset being applied by the machine-config-operator pod, which is waiting for the daemonset to roll out.
      • Some nodes are not ready, so the daemonset rollout never finishes and the wait ends in a timeout error.

      Problem:

      • The machine-config-operator pod stops reconciling whenever it hits a timeout error while waiting for the machine-config-daemon rollout.
      • Because of that, the `spec.kubeAPIServerServingCAData` field of the controllerconfig/machine-config-controller object is not updated when the kube-apiserver-operator updates the kube-apiserver-to-kubelet-client-ca configmap (a sketch for checking this desync follows after this list).
      • Without that field updated, a kube-apiserver-to-kubelet-client-ca change is never rolled out to the nodes.
      • That ultimately leads to cluster-wide unavailability of `oc logs`, `oc rsh`, etc. once the kube-apiserver starts using a client cert signed by the new kube-apiserver-to-kubelet-client-ca to access the kubelet ports.
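
      The desync described above can be checked directly by comparing the CA bundle published by the kube-apiserver-operator with what the MCO last copied into the controllerconfig. The following is a minimal diagnostic sketch, not part of the MCO itself; the ConfigMap namespace (openshift-config-managed) and data key (ca-bundle.crt) are assumptions and may need adjusting to match the cluster.

      ```go
      // Compare the kube-apiserver-to-kubelet-client-ca ConfigMap with
      // controllerconfig/machine-config-controller to detect a stale
      // kubeAPIServerServingCAData. Diagnostic sketch only; the ConfigMap
      // namespace and data key below are assumptions.
      package main

      import (
          "context"
          "encoding/base64"
          "fmt"

          metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
          "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
          "k8s.io/apimachinery/pkg/runtime/schema"
          "k8s.io/client-go/dynamic"
          "k8s.io/client-go/kubernetes"
          "k8s.io/client-go/tools/clientcmd"
      )

      func main() {
          // Build a client config from the usual kubeconfig loading rules.
          cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
              clientcmd.NewDefaultClientConfigLoadingRules(), nil).ClientConfig()
          if err != nil {
              panic(err)
          }
          kube := kubernetes.NewForConfigOrDie(cfg)
          dyn := dynamic.NewForConfigOrDie(cfg)
          ctx := context.Background()

          // CA bundle published by the kube-apiserver-operator.
          cm, err := kube.CoreV1().ConfigMaps("openshift-config-managed").
              Get(ctx, "kube-apiserver-to-kubelet-client-ca", metav1.GetOptions{})
          if err != nil {
              panic(err)
          }
          wantCA := cm.Data["ca-bundle.crt"]

          // CA bundle the MCO last synced into the controllerconfig.
          gvr := schema.GroupVersionResource{
              Group:    "machineconfiguration.openshift.io",
              Version:  "v1",
              Resource: "controllerconfigs",
          }
          cc, err := dyn.Resource(gvr).Get(ctx, "machine-config-controller", metav1.GetOptions{})
          if err != nil {
              panic(err)
          }
          // kubeAPIServerServingCAData is []byte in the API type, so it shows up base64-encoded.
          encoded, found, err := unstructured.NestedString(cc.Object, "spec", "kubeAPIServerServingCAData")
          if err != nil || !found {
              panic(fmt.Sprintf("kubeAPIServerServingCAData not found: %v", err))
          }
          gotCA, err := base64.StdEncoding.DecodeString(encoded)
          if err != nil {
              panic(err)
          }

          if string(gotCA) == wantCA {
              fmt.Println("controllerconfig is in sync with the configmap")
          } else {
              fmt.Println("STALE: controllerconfig kubeAPIServerServingCAData does not match the configmap")
          }
      }
      ```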

      Version-Release number of MCO (Machine Config Operator) (if applicable):

      4.7.21

      Platform (AWS, VSphere, Metal, etc.): (not relevant)

      Are you certain that the root cause of the issue being reported is the MCO (Machine Config Operator)?
      (Y/N/Not sure): Y

      How reproducible:

      Always, when the conditions above are met.

      Steps to Reproduce:
      1. Have some nodes not ready
      2. Force a change that requires a machine-config-daemon daemonset rollout (changing the cluster proxy settings should work for this; see the sketch after this list)
      3. Wait until a new kube-apiserver-to-kubelet-client-ca is rolled out by kube-apiserver-operator
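
      For step 2, one way to force a machine-config-daemon daemonset rollout programmatically is to patch the cluster-wide proxy settings, following the reporter's suggestion. The snippet below is a hypothetical illustration rather than a verified reproducer; the noProxy value is arbitrary and the change should be reverted after testing.

      ```go
      // Patch proxy/cluster to trigger a machine-config-daemon daemonset
      // rollout (the reporter's suggested trigger). Hypothetical reproducer
      // helper; revert the change afterwards.
      package main

      import (
          "context"
          "fmt"

          metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
          "k8s.io/apimachinery/pkg/runtime/schema"
          "k8s.io/apimachinery/pkg/types"
          "k8s.io/client-go/dynamic"
          "k8s.io/client-go/tools/clientcmd"
      )

      func main() {
          cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
              clientcmd.NewDefaultClientConfigLoadingRules(), nil).ClientConfig()
          if err != nil {
              panic(err)
          }
          dyn := dynamic.NewForConfigOrDie(cfg)

          proxyGVR := schema.GroupVersionResource{
              Group:    "config.openshift.io",
              Version:  "v1",
              Resource: "proxies",
          }
          // Merge-patch spec.noProxy on the cluster-scoped proxy object; the
          // hostname used here is an arbitrary placeholder.
          patch := []byte(`{"spec":{"noProxy":"reproducer.example.internal"}}`)
          if _, err := dyn.Resource(proxyGVR).Patch(context.Background(), "cluster",
              types.MergePatchType, patch, metav1.PatchOptions{}); err != nil {
              panic(err)
          }
          fmt.Println("patched proxy/cluster; remember to revert this change")
      }
      ```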

      Actual results:

      The new kube-apiserver-to-kubelet-client-ca is not forwarded to controllerconfig, so it is never deployed to the nodes.

      Expected results:

      The new kube-apiserver-to-kubelet-client-ca is forwarded to controllerconfig and deployed to the nodes.

      Additional info:

      In comments
