Type: Bug
Resolution: Unresolved
Priority: Medium
Affects Version: OSC 1.5.0
Description
Any restart of the caa-daemon (for example, after a ConfigMap change to load new values) causes all peer pods on the same node to be restarted on new VM instances.
Steps to reproduce
1. Restart the caa-daemon, or upgrade OSC, which restarts the CAA DaemonSet (see the sketch below).
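For reference, a minimal reproduction sketch using client-go. It assumes the CAA runs as a DaemonSet named peerpodconfig-ctrl-caa-daemon in the openshift-sandboxed-containers-operator namespace, and that the per-node limit is set in a ConfigMap named peer-pods-cm under a key PEERPODS_LIMIT_PER_NODE; all four names are assumptions here, not confirmed by this ticket, so substitute whatever your OSC installation actually uses. The DaemonSet restart is triggered with the same pod-template annotation patch that `oc rollout restart daemonset/<name>` applies.

```go
// Reproduction sketch. All resource names below are assumptions
// (peer-pods-cm, PEERPODS_LIMIT_PER_NODE, peerpodconfig-ctrl-caa-daemon,
// openshift-sandboxed-containers-operator); substitute the names used by
// your OSC installation.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the local kubeconfig (same credentials `oc` uses).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	ns := "openshift-sandboxed-containers-operator" // assumed namespace

	// Step 1 (assumed key name): change a value in the peer-pods ConfigMap,
	// e.g. raise the per-node peer-pod limit from the default of 10.
	cmPatch := []byte(`{"data":{"PEERPODS_LIMIT_PER_NODE":"20"}}`)
	if _, err := client.CoreV1().ConfigMaps(ns).Patch(ctx, "peer-pods-cm",
		types.StrategicMergePatchType, cmPatch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}

	// Step 2: restart the CAA DaemonSet so it picks up the new values.
	// This is the same pod-template annotation bump that
	// `oc rollout restart daemonset/<name>` performs.
	dsPatch := []byte(fmt.Sprintf(
		`{"spec":{"template":{"metadata":{"annotations":{"kubectl.kubernetes.io/restartedAt":%q}}}}}`,
		time.Now().Format(time.RFC3339)))
	if _, err := client.AppsV1().DaemonSets(ns).Patch(ctx, "peerpodconfig-ctrl-caa-daemon",
		types.StrategicMergePatchType, dsPatch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}

	// Once the CAA pods restart, all peer pods on the affected nodes are
	// restarted on new VM instances (the behavior this bug reports).
	fmt.Println("ConfigMap patched and CAA DaemonSet restart triggered")
}
```

The same effect can be reproduced with plain `oc` commands; the point of the bug is that nothing warns the user that every peer pod on the affected nodes will be restarted as a side effect.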
Expected result
A notification, or at least a documentation note, describing what will happen.
Actual result
All peer pods on the same node restart without any notice.
Impact
Since a caa-daemon restart is required not only for an AMI change or other minor ConfigMap changes but also for a limit change (something customers may need, given the low default limit of 10 peer pods per node), this is a realistic customer use case. The restart may impact stateful applications, especially AI workloads that keep intermediate results in the pod.
Issue links
- is related to: KATA-2963 "1.5 docs for peerpods has CVM instance sizes listed" (Closed)