RHEL / RHEL-18987

[podman5.1] Configurable logging for Podman healthcheck events

    • rhel-sst-container-tools
    • Red Hat Enterprise Linux
    • RUN 252

      • When Podman is configured to emit events to journald (which was done both because events stored on /run had previously consumed all the free space on that tmpfs, and because containers with log_driver set to "journald" seem to require it to work properly), every container healthcheck execution emits an event: a wall of text that gets logged into the journal.


      • With a typical workload of around ~44 containers per server (a figure taken from one of the users of our RHEL 8.9-based product), journald is flooded with a constant torrent of less-than-useful messages.
        The worst offender is the part listing every label set on a given image; in a production setup, some images may carry as many as ~30 labels.


      • Request: an option to disable emitting these events on healthcheck execution, or at least to limit them to events associated with failures, preferably cutting the label dump entirely. Something like an additional containers.conf option, e.g. "healthcheck_events = false".
      • Would it also be possible to do something like this for the related systemd messages? I understand these are simply the result of systemd reporting on unit activity, and while that makes sense for regular services, it gets overwhelming on a ~40-container host where it happens every few seconds.
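      For reference, a containers.conf sketch combining the journald settings described above with the requested knob. Note that "healthcheck_events" is the name suggested in this request, not a setting confirmed to exist:

      ```toml
      [engine]
      # Events go to journald instead of the /run-backed file logger,
      # per the description above.
      events_logger = "journald"

      # Proposed option from this request (hypothetical, not yet implemented):
      # suppress events emitted on every healthcheck run.
      healthcheck_events = false

      [containers]
      # Containers also log to journald, which the reporter notes appears
      # to require events_logger = "journald" to work properly.
      log_driver = "journald"
      ```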
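      Until such an option exists, one possible stopgap is journald's built-in per-service rate limiting. This is a sketch only, and the values are illustrative, not recommendations; it also drops messages indiscriminately rather than filtering healthcheck noise specifically:

      ```ini
      # /etc/systemd/journald.conf.d/ratelimit.conf (illustrative drop-in)
      [Journal]
      # Allow at most 1000 messages from a single service per 30-second
      # window; excess messages are dropped with a suppression notice.
      RateLimitIntervalSec=30s
      RateLimitBurst=1000
      ```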

              rhn-support-jnovy Jindrich Novy
              rhn-support-sbhavsar Sayali Bhavsar
              Container Runtime Eng Bot Container Runtime Eng Bot
              Alex Jia Alex Jia