CPE Infrastructure / CPE-3668

persistentvolume-controller, "fedora-ostree-content-volume-2" already bound to a different claim


      https://pagure.io/fedora-infrastructure/issue/12555

      I'm trying to migrate a few CoreOS-related projects from DeploymentConfig to Deployment. The one I will focus on in this example is the [fedora-ostree-pruner](https://pagure.io/fedora-infra/ansible/blob/main/f/playbooks/openshift-apps/fedora-ostree-pruner.yml).
      Currently the number of replicas is set to 1; after the build completes, the newly created replica remains stuck in the `Pending` state.

      ```
      adamsky@fedorapc  ~/Work/ansible  ↱ main ±  oc get pods
      NAME                                   READY   STATUS      RESTARTS   AGE
      fedora-ostree-pruner-build-1-build     0/1     Completed   0          2m11s
      fedora-ostree-pruner-f64475887-gjwt5   0/1     Pending     0          96s
      ```
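
      A `Pending` pod like this usually points at the volume wiring rather than the image. For reference, the relevant part of the migrated Deployment presumably looks roughly like this (a sketch only; the PVC and volume names come from this ticket, everything else, including the mount path, is assumed):

      ```
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: fedora-ostree-pruner
      spec:
        replicas: 1
        template:
          spec:
            containers:
              - name: fedora-ostree-pruner
                volumeMounts:
                  - name: fedora-ostree-content-volume
                    mountPath: /srv/ostree   # assumed mount path
            volumes:
              - name: fedora-ostree-content-volume
                persistentVolumeClaim:
                  claimName: fedora-ostree-content-volume
      ```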
      After some investigation, it turned out that the volume is already bound to a different claim:

      ```
      adamsky@fedorapc  ~/Work/ansible  ↱ main ±  oc describe pods
      ...
      Volumes:
        fedora-ostree-content-volume:
          Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
          ClaimName:  fedora-ostree-content-volume
          ReadOnly:   false
      ...
      Events:
        Type     Reason            Age   From               Message
        ----     ------            ----  ----               -------
        Warning  FailedScheduling  107s  default-scheduler  0/8 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/8 nodes are available: 8 Preemption is not helpful for scheduling.
      ```
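
      The error in the ticket title usually means the PersistentVolume still carries a `claimRef` to an older PVC (for example one created under the previous DeploymentConfig setup, or a claim with a different UID), so the new claim can never bind to it. A sketch of what the PV's spec likely looks like (resource names from this ticket; the namespace and field values are assumptions):

      ```
      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: fedora-ostree-content-volume-2
      spec:
        claimRef:                             # stale binding keeps the PV "reserved"
          namespace: fedora-ostree-pruner     # assumed namespace
          name: fedora-ostree-content-volume
          uid: <uid-of-the-old-claim>         # no longer matches the new PVC's UID
      ```

      If that is the case, someone with sufficient rights could release the volume by clearing the stale reference, e.g. `oc patch pv fedora-ostree-content-volume-2 --type merge -p '{"spec":{"claimRef":null}}'`. That is the standard Kubernetes remediation for a `Released` PV, not something verified against this cluster.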

      This is true for staging... and as I already found out, also for production :alien:

      OpenShift Workloads Dashboard returns:
      ```
      Conditions:
      Type          Status   Updated                 Reason
      PodScheduled  False    May 12, 2025, 1:12 PM   Unschedulable

      Message:
      0/8 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/8 nodes are available: 8 Preemption is not helpful for scheduling.
      ```
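
      For what it's worth, the "unbound immediate PersistentVolumeClaims" wording comes from the scheduler: the PVC either has no StorageClass (static provisioning) or uses one with `volumeBindingMode: Immediate`, so it must be bound before the pod can be scheduled at all. A generic illustration of such a class (hypothetical names, not this cluster's actual configuration):

      ```
      apiVersion: storage.k8s.io/v1
      kind: StorageClass
      metadata:
        name: example-immediate          # hypothetical name
      provisioner: kubernetes.io/no-provisioner
      volumeBindingMode: Immediate       # scheduler refuses pods whose PVCs are still unbound
      ```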


      Describe what you would like us to do: Due to permission issues I am unable to dig deeper into this myself; it would be nice to get some assistance in figuring it out.


      When do you need this to be done by? (YYYY/MM/DD) : ASAP


              kfenzi.fedora Kevin Fenzi
              cle_bot CLE bot