RHEL-106895

Kubelet seen creating dnf files in UBI8 image under `/var/tmp/`


    • Type: Bug
    • Resolution: Done
    • Priority: Normal
    • Component: ubi8-container
    • Severity: Low
    • Team: rhel-container-tools

      Description of problem:

          When creating a plain pod from a UBI8 image, we see dnf files created under /var/tmp/ of the container on the node.
      
      sh-4.4$ ls -la /var/tmp
      total 0
      drwxrwxrwt. 1 root root 31 Jul 29 15:44 .
      drwxr-xr-x. 1 root root 17 Jul 17 06:21 ..
      drwx------. 2 1000 1000 63 Jul 29 15:44 dnf-1000-u0mrk1il
      
      
      Nothing is mounted in the pod, nothing special is run with it, and the node is not configured with OCI hooks or anything similar.
      
      A SystemTap (stap) probe script on syscalls shows the execname and pid of kubelet creating the files.
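
      As a quick check on a suspect node, the temp directories can be searched for directly in overlay storage (a minimal sketch, assuming the default /var/lib/containers/storage root):

      sudo find /var/lib/containers/storage/overlay -maxdepth 5 \
           -path '*/diff/var/tmp/dnf-*' -ls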

      Version-Release number of selected component (if applicable):

          4.16

      How reproducible:

          Only in customer clusters. 

      Steps to Reproduce:

      $ cat ubi8-reproducer-new.yaml
      kind: Pod
      apiVersion: v1
      metadata:
        name: ubi8-new
        labels:
          app: ubi8-new
      spec:
        containers:
          - name: ubi8-new
            command:
              - /bin/sh
            image: registry.redhat.io/ubi8:5000/ubi8/ubi:8.10-1752733233
            imagePullPolicy: Always
            args:
              - '-c'
              - sleep infinity
            securityContext:
              allowPrivilegeEscalation: false
              seccompProfile:
                type: RuntimeDefault
              capabilities:
                drop:
                  - ALL     
      
      $ oc -n ubi8-test apply -f ubi8-reproducer-new.yaml
      pod/ubi8-new created
      
      $ oc -n ubi8-test get pods -o wide
      NAME                 READY   STATUS    RESTARTS   AGE     IP               NODE                                    NOMINATED NODE   READINESS GATES
      ubi8-new             1/1     Running   0          85s     10.140.181.166   worker2.vtenant4-dev.np.k8s.l0.ms.com   <none>           <none>
      ubi8-old             1/1     Running   0          4m32s   10.140.181.167   worker2.vtenant4-dev.np.k8s.l0.ms.com   <none>           <none>
      ubi8-test-affinity   1/1     Running   0          31m     10.140.182.47    worker1.vtenant4-dev.np.k8s.l0.ms.com   <none>           <none>
      
      $ oc -n ubi8-test rsh ubi8-new
      sh-4.4$
      sh-4.4$ ls -la /var/tmp
      total 0
      drwxrwxrwt. 1 root root 31 Jul 29 15:44 .
      drwxr-xr-x. 1 root root 17 Jul 17 06:21 ..
      drwx------. 2 1000 1000 63 Jul 29 15:44 dnf-1000-u0mrk1il
      sh-4.4$
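
      The directory owner can also be compared with the pod's effective uid from inside the pod (a quick sketch, run via oc rsh):

      sh-4.4$ id -u
      sh-4.4$ stat -c '%u %y %n' /var/tmp/dnf-*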
      Actual results:
          Always uid 1000; the timestamp of the files is after pod creation.

      Expected results:

      No dnf files in /var/tmp    

      Additional info:

          It is hard to pin down a reliable reproducer, but the customer has seen it happen regularly when an image is first used on a fresh node. After the image has been used once with a pod, it is difficult to reproduce at will.
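
      A fresh-node state can be approximated by removing the image from the node so that the next pod start forces a new pull (a sketch; crictl runs on the node, and the image reference is the one from the reproducer YAML):

      sudo crictl rmi registry.redhat.io/ubi8:5000/ubi8/ubi:8.10-1752733233
      oc -n ubi8-test delete pod ubi8-new
      oc -n ubi8-test apply -f ubi8-reproducer-new.yaml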
      
       
      Stap script run: 
      
      #!/usr/bin/env stap

      # Flag any creat/openat/open of a path matching the dnf temp directories.
      probe syscall.creat { if (pathname =~ ".*dnf-1000.*") printf("execname %s filename %s pid %d\n", execname(), pathname, pid()) }
      probe syscall.openat { if (filename =~ ".*dnf-1000.*") printf("execname %s filename %s pid %d\n", execname(), filename, pid()) }
      probe syscall.open { if (filename =~ ".*dnf-1000.*") printf("execname %s filename %s pid %d\n", execname(), filename, pid()) }

      # Column header for the execve trace below.
      probe begin
      {
          printf("%-24s %6s %6s %6s %14s %s\n", "TIME", "UID", "PPID", "PID",
                 "COMM", "ARGS");
      }

      # Trace every exec on the node so processes starting around pod creation
      # can be correlated with the file events above.
      probe nd_syscall.execve.return
      {
          printf("%-24s %6d %6d %6d %14s %s\n", ctime(gettimeofday_s()), uid(),
                 ppid(), pid(), execname(), cmdline_str());
      }
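
      For reference, a script like this is saved to a file and run as root on the node (the filename is illustrative); SystemTap needs kernel headers/debuginfo matching the running kernel to compile the probes:

      sudo stap -v dnf-probe.stp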
      
      
      
      Outputs seen:  
      
      execname kubelet filename "/var/lib/containers/storage/overlay/083ea222b4b0f08de973e6adc1e9f5e8eb5c28de6c9a4760d2ae8ecc39875d9b/diff/var/tmp/dnf-1000-noz2rujj" pid 3498
      
      execname kubelet filename "/var/lib/containers/storage/overlay/7190e45159076c579d2c425d750178b162717f28b0a3d10c6c49cf5b49988d81/diff/var/tmp/dnf-1000-cb3yxakg" pid 3498
      execname kubelet filename "/var/lib/containers/storage/overlay/06ab2002eb7b56b460cee03c7c3b7369a03d64a53246d77383beacafd6265dc6/diff/var/tmp/dnf-1000-hwde2m7s" pid 3498
      execname kubelet filename "/var/lib/containers/storage/overlay/9226b74729b90159314b5e6417df5d48655ee653849eb2ebfce4138913f7a919/diff/var/tmp/dnf-1000-j3gmc_xv" pid 3498
      execname kubelet filename "/var/lib/containers/storage/overlay/9226b74729b90159314b5e6417df5d48655ee653849eb2ebfce4138913f7a919/diff/var/tmp/dnf-1000-j3gmc_xv/locks" pid 3498
      execname kubelet filename "/var/lib/containers/storage/overlay/9226b74729b90159314b5e6417df5d48655ee653849eb2ebfce4138913f7a919/diff/var/tmp/dnf-1000-j3gmc_xv/locks/0ffb4738c561af3f105feb2ca32cdc67c5dc157d" pid 3498
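
      The overlay layer IDs in the output can be mapped back to layer metadata in containers/storage (a sketch, assuming the default storage root; the ID below is taken from the output above):

      LAYER=9226b74729b90159314b5e6417df5d48655ee653849eb2ebfce4138913f7a919
      sudo jq --arg id "$LAYER" '.[] | select(.id == $id)' \
          /var/lib/containers/storage/overlay-layers/layers.json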
      
      
      

        Jiri Popelka (jpopelka@redhat.com)
        Ryan Howe (rhn-support-rhowe)
        Container Runtime Eng Bot
        Container Runtime Bugs Bot
        Votes: 1
        Watchers: 13
