RHEL Testing / RHELTEST-2499

[RHEL8] Podman run failed to start container - error setting cgroup config for procHooks process

    • rhel-container-tools

From the runc code we can see this is a known issue with the cgroup v1 freezer, which has a chance of occurring when the system is very slow: https://github.com/opencontainers/cgroups/blob/9657f5a18b8d60a0f39fbb34d0cb7771e28e6278/fs/freezer.go#L37-L58

It can happen in any test running on a very slow host. The log looks like this:

      [+0004s] not ok 1 [030] podman run - basic tests
      [+0004s] # tags: distro-integration
      [+0004s] # (from function `die' in file /usr/share/podman/test/system/helpers.bash, line 757,
      [+0004s] #  from function `run_podman' in file /usr/share/podman/test/system/helpers.bash, line 381,
      [+0004s] #  in test file /usr/share/podman/test/system/030-run.bats, line 41)
      [+0004s] #   `run_podman $expected_rc run $IMAGE "$@"' failed
      [+0004s] #
      [+0004s] # [13:59:41.725692314] # podman rm -t 0 --all --force --ignore
      [+0004s] #
      [+0004s] # [13:59:41.812052509] # podman ps --all --external --format {{.ID}} {{.Names}}
      [+0004s] #
      [+0004s] # [13:59:41.878191962] # podman images --all --format {{.Repository}}:{{.Tag}} {{.ID}}
      [+0004s] # [13:59:41.936804890] quay.io/libpod/testimage:20221018 f5a99120db64
      [+0004s] #
      [+0004s] # [13:59:42.306393135] # podman run quay.io/libpod/testimage:20221018 true
      [+0004s] # [13:59:43.652314210] Error: OCI runtime error: runc: runc create failed: unable to start container process: error during container init: error setting cgroup config for procHooks process: unable to freeze
      [+0004s] # [13:59:43.658592963] [ rc=126 (** EXPECTED 0 **) ]
      [+0004s] # #/vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
      [+0004s] # #| FAIL: exit code is 126; expected 0
      [+0004s] # #\^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      [+0004s] # # [teardown]
      [+0004s] #
      [+0004s] # [13:59:43.676392325] # podman pod rm -t 0 --all --force --ignore
      [+0004s] #
      [+0004s] # [13:59:43.773388292] # podman rm -t 0 --all --force --ignore
      [+0004s] # [13:59:43.891317594] 17d571cf1f2955361f8aed813c0c9d5dda81b8faa6a955f6839adc410793ef82
      [+0004s] #
      [+0004s] # [13:59:43.894178233] # podman network prune --force
      [+0004s] #
      [+0004s] # [13:59:43.949146340] # podman volume rm -a -f
      

This is already fixed in cgroup v2; this issue is filed only for tracking purposes.

        Assignee: Unassigned
        Reporter: Yiqiao Pu (ypu@redhat.com)