Type: Epic
Priority: Major
Status: In Progress
Resolution: Unresolved
Parent: VIRTSTRAT-328 - Virtualization IaaS like management
Epic progress: 33% To Do, 0% In Progress, 67% Done
Component: Incidents & Support
Epic name: resource-allocations-at-guest-level-only
OCP/Telco Definition of Done
Epic Template descriptions and documentation.
Epic Goal
VMIs provide two ways of setting memory:
- `vmi.spec.domain.memory.guest`: sets the guest memory, i.e. the amount of memory the guest can "see", or is aware of.
- `vmi.spec.domain.resources.requests[memory]`: sets the amount of memory allocated to the virt-launcher pod. This amount generally consists of guest memory + virt infra overhead (I'm leaving over-commitment out here for simplicity).
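For illustration, here is a minimal VMI snippet with both fields set; the name and amounts are example values only, not taken from this epic:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
  name: example-vmi            # hypothetical name
spec:
  domain:
    devices: {}
    memory:
      guest: 2Gi               # the amount of memory the guest "sees"
    resources:
      requests:
        memory: 2Gi            # memory allocated to the virt-launcher pod
```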
Currently, `vmi.spec.domain.resources.requests[memory]` is always populated, even when the user does not set it. If the user does not set it, then guest memory must have been set, and that value is copied into the requests field.
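As a hedged illustration of the current defaulting behavior (the amounts are examples):

```yaml
# What the user applies: only guest memory is set
spec:
  domain:
    devices: {}
    memory:
      guest: 2Gi
---
# What is stored today: requests[memory] is auto-populated
spec:
  domain:
    devices: {}
    memory:
      guest: 2Gi
    resources:
      requests:
        memory: 2Gi            # copied from memory.guest, though the user never set it
```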
The expectation is that when a VM / VMI is created via the GUI, only the guest configs are set (guest memory / CPU topology), while the CPU / memory requests / limits stay blank.
Why is this important?
The current behavior is problematic. The VMI should reflect the declarative desires of the user. If the user asked for some amount of guest memory but didn't specify any memory requests, it means they don't care about the requests, so they shouldn't be reflected at the VMI level.
This is beneficial in two main ways:
1) The user is less confused: only what they specified is reflected as the desired state.
2) KubeVirt gains the opportunity to introduce new optimizations in the future. For example, if we improve the virt infra overhead calculation (which is currently not accurate), then after an upgrade the virt-launcher pod would be able to pick up the updated memory request amount, since it is not dictated by the VMI.
Scenarios
- ...
Acceptance Criteria
- When creating a VM / VMI via the GUI, the only fields configured are guest memory / CPU topology. Memory / CPU requests / limits are defined only at the pod level (by virt-controller); see the sketch after this list.
- Users who do want to edit the memory / CPU requests / limits explicitly need to do so under "advanced settings".
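A minimal sketch of the desired end state under these criteria; the overhead amount is an assumed example, computed by virt-controller at the pod level rather than stored on the VMI:

```yaml
# VMI as created by the GUI: guest-level settings only
spec:
  domain:
    devices: {}
    cpu:
      cores: 2                 # guest CPU topology
    memory:
      guest: 2Gi               # guest memory; no resources.requests on the VMI
---
# Resulting virt-launcher pod resources, computed by virt-controller (illustrative):
# requests.memory = guest memory + virt infra overhead (e.g. 2Gi + ~300Mi)
```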
Dependencies (internal and external)
- ...
Previous Work (Optional):
- …
Open questions:
- …
Done Checklist
- CI - CI is running, tests are automated and merged.
- Release Enablement <link to Feature Enablement Presentation>
- DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
- DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
- DEV - Downstream build attached to advisory: <link to errata>
- QE - Test plans in Polarion: <link or reference to Polarion>
- QE - Automated tests merged: <link or reference to automated tests>
- DOC - Downstream documentation merged: <link to meaningful PR>