Openshift sandboxed containers / KATA-2579

sizeLimit attribute for ephemeral memory volumes doesn't work with OSC


    • Type: Bug
    • Resolution: Duplicate
    • Priority: Medium
      .Increasing the `sizeLimit` does not expand an ephemeral volume

      You cannot use the `sizeLimit` parameter in the pod specification to expand ephemeral volumes because the volume size default is 50% of the memory assigned to the sandboxed container.

      Workaround: Change the size by remounting the volume. For example, if the memory assigned to the sandboxed container is 6 GB and the ephemeral volume is mounted at `/var/lib/containers`, you can increase the size of this volume beyond the 3 GB default by running the following command:

      [source,terminal]
      ----
      $ mount -o remount,size=4G /var/lib/containers
      ----
    • Release Note Type: Known Issue
    • Release Note Status: Done

      Ephemeral memory-backed volumes (`emptyDir` with `medium: Memory`) have an option to specify the size via `sizeLimit`, but this does not work with OSC (Kata).

      Example snippet:

       volumes:
          - name: container-storage
            emptyDir:
              medium: Memory
              sizeLimit: 1Gi

      Steps to reproduce

      1. Create a Kata pod that specifies sizeLimit for an ephemeral memory volume
      2. Exec a shell in the pod and check the size of the volume mount to see if it matches the sizeLimit (see the example commands below)
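
      A minimal sketch of that check (the manifest filename, pod name, and mount path are placeholders; the Reproducer section below uses concrete values):

      $ oc apply -f pod.yaml
      $ oc exec <pod-name> -- df -h <mount-path>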

      Expected result

      The size of the volume mount should match the sizeLimit.

      Actual result

      The size of the volume mount does not match the sizeLimit; it uses the default of 50% of the total memory configured for the VM.

      Impact

      Customers using build pipelines on OSC will not be able to set the size of the container storage (e.g. /var/lib/containers).

      They will not be able to set the size limit for ephemeral volumes and will have to rely on workarounds.

      Env

      OCP 4.13, found while working on https://issues.redhat.com/browse/KATA-2229

      Additional helpful info

      Reproducer

      apiVersion: v1
      kind: Pod
      metadata:
        name: buildah-kata
        annotations:
          io.katacontainers.config.hypervisor.default_memory: "6144"
          io.katacontainers.config.hypervisor.default_vcpus: "2"
      spec:
        runtimeClassName: kata
        containers:
          - name: buildah
            image: quay.io/buildah/stable:v1.30
            command: ["sh", "-c"]
            args:
            - sleep infinity
            securityContext:
              privileged: true
            volumeMounts:
              - name: container-storage
                mountPath: /var/lib/containers
        volumes:
          - name: container-storage
            emptyDir:
              medium: Memory
              sizeLimit: 1Gi

      After deploying the pod, exec a shell and check the size of `/var/lib/containers`.

      It will be approximately 3Gi (50% of the VM memory) and not the 1Gi requested via sizeLimit.
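
      For example, one way to observe this from outside the pod (a sketch; it assumes the oc CLI is configured for the cluster and that df is available in the buildah image):

      $ oc exec buildah-kata -- df -h /var/lib/containers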

      This works fine for runc pods.

              Assignee: Unassigned
              Reporter: Pradipta Banerjee (bpradipt)
              Miriam Weiss
              Votes: 0
              Watchers: 3
