RHEL-24333

[RFE] podman 5: to edit the information under man containers.conf

    • Bug
    • Resolution: Done-Errata
    • Normal
    • rhel-9.5
    • rhel-9.4
    • podman
    • None
    • Moderate
    • 7
    • rhel-sst-container-tools
    • 3
    • False
    • None
    • Red Hat Enterprise Linux
    • RUN 252, RUN 253, RUN 254, RUN 255, RUN 256, RUN 257, RUN 258
    • None

      This is an RFE to update the information provided about num_locks in man containers.conf.

      The information currently provided is as follows:

             num_locks=2048

             Number of locks available for containers and pods. Each created container or pod consumes one lock. The default number available is 2048. If this is changed, a lock renumbering must be performed, using the podman system renumber command.

       

      The num_locks value is defined in the containers.conf file, and the default entry in the file is:

      #num_locks = 2048 (the entry is commented out, but it can be uncommented and set to the desired value)
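
      For illustration, a minimal sketch of raising the limit (the value 4096 and the file path are only assumptions for this example; the renumber step is the one the man page requires, and it should be run while no other podman processes are running):

      # in /etc/containers/containers.conf, [containers] section
      num_locks = 4096

      # after changing the value, renumber the existing locks
      podman system renumber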

      When setting a value for num_locks, the information we find in the man page states:

      **Each created container or pod consumes one lock**

      However, during my testing I have found that the locks consumed are not limited to pods and containers: every volume you create also consumes a lock, whether it is attached to a container or not.
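
      As a rough sanity check (assuming the standard podman listing commands; containers, pods, and volumes are counted separately here), the number of lock-holding objects can be tallied and compared against num_locks:

      # count lock-holding objects: containers, pods, and volumes
      podman ps -a -q | wc -l
      podman pod ls -q | wc -l
      podman volume ls -q | wc -l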

      How reproducible:

      Below is the testing done to validate this; it can be used as a reproducer.

      1) I created volumes using the script below:

      #!/bin/bash

      # Loop to create 2078 Docker volumes
      for i in {1..2078}; do
        volume_name="test-volume$i"
        docker volume create "$volume_name"
        echo "Volume $volume_name created."
      done

      I provided execute permission to the script and then ran it.
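
      For example (the filename create-volumes.sh is just an assumed name for this sketch):

      chmod +x create-volumes.sh
      ./create-volumes.sh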

      While the script creates the volumes, the output looks like this:

      Volume test-volume2032 created.
      Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
      test-volume2033
      Volume test-volume2033 created.
      Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
      test-volume2034
      Volume test-volume2034 created.
      Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
      Error: allocating lock for new volume: allocation failed; exceeded num_locks (2048)
      Volume test-volume2035 created.
      Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
      Error: allocating lock for new volume: allocation failed; exceeded num_locks (2048)
      Volume test-volume2036 created.
      Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
      Error: allocating lock for new volume: allocation failed; exceeded num_locks (2048)

      Every volume occupies one lock, so after test-volume2034 each new volume fails with the error below.
      Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
      Error: allocating lock for new volume: allocation failed; exceeded num_locks (2048)

      When I try to run a container, it throws an error:

      [root@server ~]# podman run --name test-container registry.access.redhat.com/ubi8/httpd-24:latest
      Error: allocating lock for new container: allocation failed; exceeded num_locks (2048)
      [root@server ~]#

      This is because there are no available locks left to assign to the container.
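
      One way to recover on this test system is to remove the unused test volumes, which frees their locks; note that podman volume prune removes all unused volumes, so it should only be run on a disposable test box:

      podman volume prune -f
      podman run --name test-container registry.access.redhat.com/ubi8/httpd-24:latest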

      I want the information about num_locks to be edited as follows:

      num_locks=2048

             Number of locks available for containers, pods, and volumes. Each created container, pod, or volume consumes one lock.

      Regards

      Sachin

              rhn-support-jnovy (Jindrich Novy)
              rhn-support-sachisha (Sachin Sharma)
              Container Runtime Eng Bot
              Yuhui Jiang