OpenShift Autoscaling / AUTOSCALE-58

Make any necessary VPA Operator changes for in-place pod resize feature


    • Type: Story
    • Priority: Normal
    • Resolution: Done
    • Sprint: AUTOSCALE - Sprint 279

      As an OpenShift customer, I want a seamless installation and upgrade experience when transitioning to a new version of the VPA that includes the in-place pod resize feature.

      Please make sure the VPA Operator can install and manage the new controllers that include this feature.

      EDIT: AUG 27, 2025

      The upstream feature work has been completed in this PR: https://github.com/kubernetes/autoscaler/pull/8115

      It involves introducing a new updateMode called "InPlaceOrRecreate".
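
      To illustrate, the following Go sketch builds a VPA object with the new mode. The mode string comes from the upstream PR; the target workload is hypothetical, and the sketch uses a string conversion rather than assuming the exact Go constant name upstream chose.

```go
package main

import (
	"fmt"

	autoscalingv1 "k8s.io/api/autoscaling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	vpav1 "k8s.io/autoscaler/vertical-pod-autoscaler/pkg/apis/autoscaling.k8s.io/v1"
)

func main() {
	// "InPlaceOrRecreate" is the new updateMode introduced by the upstream PR.
	// A string conversion is used here rather than assuming the exact Go
	// constant name in the upstream types package.
	mode := vpav1.UpdateMode("InPlaceOrRecreate")

	// Hypothetical target workload, for illustration only.
	vpa := &vpav1.VerticalPodAutoscaler{
		ObjectMeta: metav1.ObjectMeta{Name: "example-vpa", Namespace: "default"},
		Spec: vpav1.VerticalPodAutoscalerSpec{
			TargetRef: &autoscalingv1.CrossVersionObjectReference{
				APIVersion: "apps/v1",
				Kind:       "Deployment",
				Name:       "example-app",
			},
			UpdatePolicy: &vpav1.PodUpdatePolicy{UpdateMode: &mode},
		},
	}

	fmt.Printf("%s/%s updateMode=%s\n", vpa.Namespace, vpa.Name, *vpa.Spec.UpdatePolicy.UpdateMode)
}
```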

      Right now in 4.20, it is behind a VPA-level alpha feature gate, but in 1.34 (4.21) the gate will graduate to beta, so we will no longer need to enable it manually downstream. Note that we will need to rebase onto 1.34 for that to take effect.

      We need to make sure that the VPA Operator supports this new mode, and that the operator CI runs and passes the new upstream test suites.

      DoD:

      1. The VPA Operator is able to use the InPlaceOrRecreate mode (you can set the mode on VPA objects, as in the sketch above, and pods are actually resized in place without eviction; see the verification sketch after this list).
      2. Upstream InPlaceOrRecreate tests are executed in CI and pass.
      3. VPA itself has been rebased such that we do not need to manually enable any feature gates.
      4. Provide info to docs
      5. Provide info to QE (verify that existing upstream tests are sufficient)
      6. Make sure that all upstream e2e test suites are enabled in our downstream CI (and make a card to do it if it's not trivial to do in this card)
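
      For item 1's verification, one way CI or QE might confirm the no-eviction behavior is to check that a resized pod keeps its UID and restart counts while its status-reported container resources change. A minimal client-go sketch follows, with hypothetical namespace and pod names:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Hypothetical pod name; in a real test this would be a pod owned by the
	// VPA's target workload.
	pod, err := client.CoreV1().Pods("default").Get(context.TODO(), "example-app-pod", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// After an in-place resize we expect the pod UID to be unchanged (no
	// eviction or recreation), restart counts to be stable, and the
	// status-reported container resources to show the new values.
	fmt.Printf("pod UID: %s\n", pod.UID)
	for _, cs := range pod.Status.ContainerStatuses {
		fmt.Printf("container=%s restarts=%d resources=%+v\n", cs.Name, cs.RestartCount, cs.Resources)
	}
}
```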

              Assignee: Max Cao (rh-ee-macao)
              Reporter: Joel Smith (joelsmith.redhat)