OCPBUGS-41515: Removing old weak ciphers from security profile for Hypershift hosted cluster


    • Bug
    • Resolution: Done-Errata
    • Normal
    • None
    • 4.14.z, 4.15.z, 4.16.0
    • HyperShift
    • Critical
    • Yes
    • Hypershift Sprint 259
    • 1
    • False
    • N/A
    • Release Note Not Required
    • Done

      This is a clone of issue OCPBUGS-38624. The following is the description of the original issue:

      This is a clone of issue OCPBUGS-30986. The following is the description of the original issue:

      Description of problem:

      After we applied the Old tlsSecurityProfile to the Hypershift hosted cluster, the kube-apiserver ran into a CrashLoopBackOff failure, which blocked our testing.
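
      For reference, here is one way to list hosted clusters together with the TLS profile type each one has configured. This is a minimal sketch: the field path follows the patch used in the reproduction steps below, and "clusters" is the HostedCluster namespace used throughout this report.

      $ oc get hostedclusters -n clusters \
          -o custom-columns='NAME:.metadata.name,TLS_PROFILE:.spec.configuration.apiServer.tlsSecurityProfile.type'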
          

      Version-Release number of selected component (if applicable):

      $ oc get clusterversion
      NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
      version   4.16.0-0.nightly-2024-03-13-061822   True        False         129m    Cluster version is 4.16.0-0.nightly-2024-03-13-061822
      
          

      How reproducible:

          always
          

      Steps to Reproduce:

          1. Set KUBECONFIG to the kubeconfig of the Hypershift management cluster
          2. hostedcluster=$( oc get -n clusters hostedclusters -o json | jq -r .items[].metadata.name)
          3. oc patch hostedcluster $hostedcluster -n clusters --type=merge -p '{"spec": {"configuration": {"apiServer": {"tlsSecurityProfile":{"old":{},"type":"Old"}}}}}'
      hostedcluster.hypershift.openshift.io/hypershift-ci-270930 patched
          4. Check that the tlsSecurityProfile was applied:
          $ oc get HostedCluster $hostedcluster -n clusters -ojson | jq .spec.configuration.apiServer
      {
        "audit": {
          "profile": "Default"
        },
        "tlsSecurityProfile": {
          "old": {},
          "type": "Old"
        }
      }
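
      To confirm what the control plane actually rendered from the profile, the kube-apiserver deployment and the ConfigMaps in the hosted control plane namespace can be grepped for cipher settings. This is a rough sketch: depending on the release the ciphers may land in the container arguments or in a generated ConfigMap, and clusters-${hostedcluster} is the control plane namespace shown under Actual results below.

      $ oc get deployment kube-apiserver -n clusters-${hostedcluster} -o json | grep -i -e cipher -e mintlsversion
      $ oc get configmaps -n clusters-${hostedcluster} -o json | grep -i -e cipher -e mintlsversion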
          

      Actual results:

      One of the kube-apiserver pods of the hosted cluster ran into CrashLoopBackOff and stayed stuck in that state, so the Old tlsSecurityProfile configuration was never fully rolled out.
      
      $ oc get pods -l app=kube-apiserver  -n clusters-${hostedcluster}
      NAME                              READY   STATUS             RESTARTS      AGE
      kube-apiserver-5b6fc94b64-c575p   5/5     Running            0             70m
      kube-apiserver-5b6fc94b64-tvwtl   5/5     Running            0             70m
      kube-apiserver-84c7c8dd9d-pnvvk   4/5     CrashLoopBackOff   6 (20s ago)   7m38s
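
      The logs of the crashing replica are the most useful data point. A sketch for collecting them, using the CrashLoopBackOff pod from the listing above and assuming the failing container is the one named kube-apiserver:

      $ oc logs kube-apiserver-84c7c8dd9d-pnvvk -n clusters-${hostedcluster} -c kube-apiserver --previous
      $ oc describe pod kube-apiserver-84c7c8dd9d-pnvvk -n clusters-${hostedcluster} | grep -A 5 'Last State'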
          

      Expected results:

          Applying the Old tlsSecurityProfile should succeed, with all kube-apiserver pods rolling out and becoming Ready.
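
      One way to verify this after applying the profile (a sketch, reusing the Deployment name and control plane namespace shown elsewhere in this report):

      $ oc rollout status deployment/kube-apiserver -n clusters-${hostedcluster} --timeout=10m
      $ oc get pods -l app=kube-apiserver -n clusters-${hostedcluster}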
          

      Additional info:

         This can also be reproduced on 4.14 and 4.15. The most recent passing runs of the test case are listed below:
        Result   Test case    Time (UTC)            Platform   Version                              Install type
        passed   API_Server   2024-02-19 13:34:25   aws        4.14.0-0.nightly-2024-02-18-123855   hypershift
        passed   API_Server   2024-02-08 02:24:15   aws        4.15.0-0.nightly-2024-02-07-062935   hypershift
        passed   API_Server   2024-02-17 08:33:37   aws        4.16.0-0.nightly-2024-02-08-073857   hypershift
      
      Based on the test history, it appears that a code change introduced in February caused this regression.
          

              Seth Jennings (sjenning)
              OpenShift Prow Bot (openshift-crt-jira-prow)
              Ke Wang