Openshift sandboxed containers / KATA-2965

controller-manager stuck after upgrade from 1.5.3 to 1.6.0


    • Type: Bug
    • Resolution: Not a Bug
    • Priority: Medium
    • Affects Version: OSC 1.6.0
    • Component: Operator

      Description

      <What were you trying to do that didn't work?>

      I was trying to deploy a KataConfig with peer pods enabled after already having done so with 1.5.3. I did not delete the openshift-sandboxed-containers-operator namespace, hoping I would not have to reconfigure everything. kata-remote never became available.

      Steps to reproduce

      <What actions did you take to hit the bug?>
      1. Install 1.5.3.
      2. Configure peer pods.
      3. Delete the KataConfig.
      4. Uninstall 1.5.3.
      5. Install 1.6.0 and create a KataConfig.
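The KataConfig created in step 5 can be sketched as below. The spec fields match the KataConfig captured later in this report; the `v1` version in `apiVersion` is an assumption, since the controller log only shows the group `kataconfiguration.openshift.io`:

```yaml
# Sketch of the KataConfig applied in step 5.
# The apiVersion version (v1) is assumed; the group comes from the
# controller log, and the spec matches the object dumped in this report.
apiVersion: kataconfiguration.openshift.io/v1
kind: KataConfig
metadata:
  name: example-kataconfig
spec:
  checkNodeEligibility: false
  enablePeerPods: true
  kataConfigPoolSelector: null
  logLevel: info
```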

      Expected result

      <What did you expect to happen?>

      kata-remote runtime class available, podvm image created, and the ConfigMap updated with the image ID

      Actual result

      <What actually happened?>

      The KataConfig status is still InProgress. The kata runtime class is available, but there is no kata-remote, no peer pods DaemonSet or webhook pods, no image creation job, no image, and AZURE_IMAGE_ID is blank.

      Impact

      <How badly does this interfere with using the software?>

      Env

      <Where was the bug found, i.e. OCP build, operator build, kata-containers build, cluster infra, test case id>

      OSC 1.5.3 upgraded to 1.6.0-41, on OCP 4.15.10

      Additional helpful info

      <logs, screenshot, doc links, etc.>

      Tail of controller-manager log:

      2024-05-02T17:39:00Z    INFO    image-generator checkKeysPresentAndNotEmpty: key not present or has an empty value      {"key": "AZURE_RESOURCE_GROUP"}
      2024-05-02T17:39:00Z    INFO    image-generator error validating peer-pods-cm and peer-pods-secret      {"err": "validatePeerPodsConfigs: cannot find the required keys in peer-pods-cm ConfigMap"}
      2024-05-02T17:39:00Z    INFO    controllers.KataConfig  InProgress Condition set to PodVMImageJobFailed
      2024-05-02T17:39:00Z    ERROR   Reconciler error        {"controller": "kataconfig", "controllerGroup": "kataconfiguration.openshift.io", "controllerKind": "KataConfig", "KataConfig": {"name":"example-kataconfig"}, "namespace": "", "name": "example-kataconfig", "reconcileID": "f9f22a69-735d-4440-a838-686607643c9d", "error": "error validating peer-pods-cm and peer-pods-secret"}
      sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
              /remote-source/cachito-go-with-deps/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime@v0.15.0/pkg/internal/controller/controller.go:324
      sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
              /remote-source/cachito-go-with-deps/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime@v0.15.0/pkg/internal/controller/controller.go:265
      sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2
              /remote-source/cachito-go-with-deps/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime@v0.15.0/pkg/internal/controller/controller.go:226
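The failing validation points at a missing or empty AZURE_RESOURCE_GROUP key in the peer-pods-cm ConfigMap. A minimal sketch of that ConfigMap, assuming the namespace mentioned in the description and showing only keys that appear in this report (a real deployment needs further provider keys; the placeholder value is illustrative):

```yaml
# Hypothetical sketch of the peer-pods-cm ConfigMap the validator checks.
# Only keys named in this report are shown.
apiVersion: v1
kind: ConfigMap
metadata:
  name: peer-pods-cm
  namespace: openshift-sandboxed-containers-operator
data:
  AZURE_RESOURCE_GROUP: "<your-resource-group>"  # key reported missing/empty
  AZURE_IMAGE_ID: ""  # blank here; populated once the image creation job runs
```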

       

      kataconfig:

      spec:
        checkNodeEligibility: false
        enablePeerPods: true
        kataConfigPoolSelector: null
        logLevel: info
      status:
        conditions:
          - lastTransitionTime: '2024-05-02T16:30:15Z'
            message: Performing initial installation of kata on cluster
            reason: Installing
            status: 'True'
            type: InProgress
        kataNodes:
          installed:
            - cmead-ocp415-1-mgfm9-worker-eastus1-plnwh
            - cmead-ocp415-1-mgfm9-worker-eastus2-6fjzp
            - cmead-ocp415-1-mgfm9-worker-eastus3-qrzvv

              Assignee: Cameron Meadors (cmeadors@redhat.com)
              Reporter: Cameron Meadors (cmeadors@redhat.com)