Type: Story
Resolution: Obsolete
Priority: Major
Based on the collected profiling data, the majority of olm/catalog memory usage comes from the caches that OLM builds to watch certain resources. There are a few options that can be used to improve cache usage and reduce OLM's overall memory footprint, such as:
- Eliminate potentially unused/duplicate caches
- Reduce the size of cached resources (for example, the bundle/operator objects)
- Filter cached resources to only those that OLM actually cares about and manages (see the sketch below)
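A minimal sketch of the last two options, assuming a standard client-go informer setup rather than OLM's actual wiring: a label selector (the `olm.managed=true` key/value below is illustrative, not OLM's real label) keeps unmanaged objects out of the cache entirely, and a transform function strips fields the controllers never read before objects are stored.

```go
package cachefilter

import (
	"time"

	"k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// filteredFactory builds an informer factory whose list/watch calls only
// return objects carrying a given label, so unrelated objects never enter
// the in-memory cache. The label key/value are placeholders for illustration.
func filteredFactory(cfg *rest.Config) (informers.SharedInformerFactory, error) {
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return nil, err
	}
	return informers.NewSharedInformerFactoryWithOptions(
		client,
		10*time.Minute, // resync period
		informers.WithTweakListOptions(func(opts *metav1.ListOptions) {
			opts.LabelSelector = "olm.managed=true" // illustrative selector
		}),
	), nil
}

// stripForCache shrinks each object before it is stored in the cache by
// dropping fields the controller never reads (managedFields and the large
// last-applied-configuration annotation).
func stripForCache(obj interface{}) (interface{}, error) {
	accessor, err := meta.Accessor(obj)
	if err != nil {
		return obj, nil // not a regular API object (e.g. a tombstone); keep as-is
	}
	accessor.SetManagedFields(nil)
	if a := accessor.GetAnnotations(); a != nil {
		delete(a, "kubectl.kubernetes.io/last-applied-configuration")
		accessor.SetAnnotations(a)
	}
	return obj, nil
}
```

On recent client-go versions the transform can be attached with something like `factory.Core().V1().ConfigMaps().Informer().SetTransform(stripForCache)`, as long as it is registered before the informer starts.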
On the CPU side, most of the spikes come from the copied-CSV operations; these can be reduced by improving the logic that OLM uses to compare CSVs and trigger copied-CSV updates, such as:
- Use a hash comparison instead of comparing the entire object (see the sketch after this list)
- Reduce the size of the copied CSVs
- Eliminate copied CSVs altogether
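A minimal sketch of the hash-comparison idea, under the assumption that a content hash of the fields worth copying is stored as an annotation on each copied CSV. The struct fields and annotation key below are placeholders, not OLM's actual types or keys.

```go
package copyhash

import (
	"crypto/sha256"
	"encoding/hex"
	"encoding/json"
	"fmt"
)

// relevantFields captures only the parts of a CSV that, when changed, should
// trigger an update of its copies in other namespaces.
type relevantFields struct {
	Labels      map[string]string `json:"labels,omitempty"`
	Annotations map[string]string `json:"annotations,omitempty"`
	Spec        interface{}       `json:"spec"`
}

// hashFor serializes the relevant fields and returns a short content hash.
// Storing this hash as an annotation on each copied CSV lets the controller
// compare two small strings instead of deep-comparing entire objects.
func hashFor(f relevantFields) (string, error) {
	raw, err := json.Marshal(f)
	if err != nil {
		return "", fmt.Errorf("marshal csv fields: %w", err)
	}
	sum := sha256.Sum256(raw)
	return hex.EncodeToString(sum[:]), nil
}

// needsUpdate reports whether a copied CSV is stale by comparing the hash
// stored on the copy with the hash computed from the original. It also
// returns the freshly computed hash so the caller can store it on the copy.
func needsUpdate(original relevantFields, copyAnnotations map[string]string) (bool, string, error) {
	want, err := hashFor(original)
	if err != nil {
		return false, "", err
	}
	const hashAnnotation = "example.io/copy-hash" // placeholder annotation key
	return copyAnnotations[hashAnnotation] != want, want, nil
}
```

With the hash stored on each copy, the sync loop only needs to compare two short strings per namespace and can skip a deep comparison over the full objects.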