[892135439] Upstream Reporter: Martin Polden
Upstream issue status: Open
Upstream description:
Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)
/kind bug
Description
Unable to stop or remove a Podman container because of stale exec sessions. The PID associated with the exec session is reused due to PID wrapping in the root PID namespace, causing Podman to believe the exec session is still alive.
Steps to reproduce the issue:
- We have a running Podman container.
- We execute podman exec periodically in the container, approximately once every minute. Our use case for this is collecting metrics.
- When running podman exec, Podman stores an "exec session" containing metadata (e.g. PID of the spawned process) in its state database.
- In some cases we can end up with stale exec sessions, e.g. podman exec can be killed due to a timeout and is then unable to clean up the exec session.
- The container then accumulates stale exec sessions in its database (the ExecIDs array in podman inspect <ctr> grows), referring to processes that are no longer running.
- When stopping or removing a container, Podman checks whether the PID of any exec session still refers to a running process. It retrieves the PIDs from its state database in /data/containers/graph/libpod/bolt_state.db.
- However, the persisted PID may now refer to a different running process, because PIDs wrap around after reaching the limit specified in /proc/sys/kernel/pid_max.
- When the PID persisted in the exec session has been reused by another process, Podman believes the exec session is still active and the container can no longer be stopped or removed (a minimal sketch of this failure mode appears below).
- Max current PID and limit on a system where we're triggering this bug:
$ ps aux | awk '{print $2}' | sort -rn | head -1
399578
$ cat /proc/sys/kernel/pid_max
409600
- But why are we reaching the limit so quickly? (We see the "improper state" issue every other day.)
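One way to gauge how fast PIDs are being consumed (a hedged sketch, not from the original report; /proc/sys/kernel/ns_last_pid is only readable when the kernel is built with CONFIG_CHECKPOINT_RESTORE):

# Sample the most recently allocated PID in the current PID namespace
# twice, one minute apart. The difference approximates the allocation
# rate (it ignores a wraparound within the sample window).
a=$(cat /proc/sys/kernel/ns_last_pid)
sleep 60
b=$(cat /proc/sys/kernel/ns_last_pid)
echo "approximately $((b - a)) PIDs allocated in the last minute"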
My guess is that this is due to PID namespaces. A container has its own PID namespace, which maps into the root PID namespace, and the number of possible PIDs in that namespace is smaller than the total number of possible PIDs on the host.
The PID stored in the exec session is the PID in the root namespace.
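A minimal sketch of the failure mode (illustrative only, not Podman's actual code): a liveness check keyed on the PID alone cannot distinguish the original exec process from an unrelated process that inherited its PID after a wrap.

# pid is the value persisted in the exec session at exec time.
pid=399578
# kill -0 sends no signal; it only tests whether a process with this
# PID currently exists. After PID wraparound it may match a completely
# different process.
if kill -0 "$pid" 2>/dev/null; then
    echo "exec session considered alive"   # false positive after reuse
fi
# The container-namespace to root-namespace PID mapping can be inspected
# via the NSpid line in /proc (kernel 4.1 or newer):
grep NSpid "/proc/$pid/status"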
Describe the results you received:
Depending on when the PID reuse happens, either stop or rm fails.
podman stop <ctr> fails with:
Error: container 5c5925673e244190340d1af86cb2bb2d9438691e9a48e883d77fedf09d87222a has active exec sessions, refusing to clean up: container state improper
podman rm <ctr> fails with:
Error: cannot remove container 86795917878f6131ca98b45a5e7a87b32fdb9121a4359547b6b007199d115b99 as it has active exec sessions: container state improper
Describe the results you expected:
That podman stop and podman rm succeed.
Additional information you deem important (e.g. issue happens only occasionally):
If the container cannot be removed, the only solution is to restart it first with podman restart <ctr> && podman stop <ctr> && podman rm <ctr>.
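The buildup can also be observed before the container gets stuck, by watching the ExecIDs array mentioned in the reproduction steps (a hedged sketch; field name as reported by podman inspect above):

# Print the exec session IDs recorded for the container; the list keeps
# growing while stale sessions accumulate.
podman inspect --format '{{ .ExecIDs }}' <ctr>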
Output of podman version:
Version:      2.2.1
API Version:  2
Go Version:   go1.14.7
Built:        Mon Feb 8 21:19:06 2021
OS/Arch:      linux/amd64

Output of podman info --debug:
host:
  arch: amd64
  buildahVersion: 1.18.0
  cgroupManager: systemd
  cgroupVersion: v1
  conmon:
    package: conmon-2.0.22-3.module+el8.3.1+9857+68fb1526.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.22, commit: a40e3092dbe499ea1d85ab339caea023b74829b9'
  cpus: 24
  distribution:
    distribution: '"rhel"'
    version: "8.3"
  eventLogger: file
  hostname: <snip>
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 4.18.0-240.15.1.el8_3.x86_64
  linkmode: dynamic
  memFree: 267927552
  memTotal: 25018028032
  ociRuntime:
    name: runc
    package: runc-1.0.0-70.rc92.module+el8.3.1+9857+68fb1526.x86_64
    path: /usr/bin/runc
    version: 'runc version spec: 1.0.2-dev'
  os: linux
  remoteSocket:
    path: /run/podman/podman.sock
  rootless: false
  slirp4netns:
    executable: ""
    package: ""
    version: ""
  swapFree: 0
  swapTotal: 0
  uptime: 366h 4m 16.74s (approximately 15.25 days)
registries:
  search:
  - registry.access.redhat.com
  - registry.redhat.io
  - docker.io
store:
  configFile: /etc/containers/storage.conf
  containerStore:
    number: 1
    paused: 0
    running: 1
    stopped: 0
  graphDriverName: overlay
  graphOptions:
    overlay.mountopt: nodev,metacopy=on
  graphRoot: /data/containers/graph
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "true"
  imageStore:
    number: 1
  runRoot: /data/containers/run
  volumePath: /data/containers/graph/volumes
version:
  APIVersion: "2"
  Built: 1612819146
  BuiltTime: Mon Feb 8 21:19:06 2021
  GitCommit: ""
  GoVersion: go1.14.7
  OsArch: linux/amd64
  Version: 2.2.1

Package info (e.g. output of rpm -q podman or apt list podman):
podman-2.2.1-7.module+el8.3.1+9857+68fb1526.x86_64

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide? (https://github.com/containers/podman/blob/master/troubleshooting.md)
Yes. (We're running the latest version available in RHEL 8.3)
Additional environment details (AWS, VirtualBox, physical, etc.):
Physical.
Upstream URL: https://github.com/containers/conmon/issues/260