OpenShift Bugs / OCPBUGS-8994

Black Duck Connector operator gets oom-killed

    • Type: Bug
    • Resolution: Unresolved
    • Priority: Undefined
    • Affects Version/s: 4.8
    • Component/s: ISV Operators

      Description of problem:

      I deployed the blackduck-connector-operator v1.0.0. As soon as I create an instance of the BlackduckConnector CRD, the blackduck-connector-operator-controller-manager process gets OOM-killed, and the Pod goes into CrashLoopBackOff.

      The cluster is a fresh Single Node OpenShift install with nothing else running. The OpenShift console reports the node's memory as "15.92 GiB available of 31.4 GiB".

      Version-Release number of selected component (if applicable):

      OCP 4.8.12 as Single Node OpenShift
      blackduck-connector-operator v1.0.0

      How reproducible:

      always

      Steps to Reproduce:
      1. Install blackduck-connector-operator v1.0.0 through the console's Operators section.
      2. Use the console to create an instance of the BlackduckConnector CRD. I filled in a custom externalBlackDuck section with domain, user, and password (see the sketch below).
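
      For illustration, the CR was shaped roughly like the following sketch (the apiVersion and kind are taken from the operator's logs; the exact spec key names are assumed from the console form, and the name and externalBlackDuck values are placeholders):

      apiVersion: charts.synopsys.com/v1alpha1
      kind: BlackduckConnector
      metadata:
        name: blackduckconnector-sample       # placeholder name
        namespace: openshift-operators
      spec:
        externalBlackDuck:
          domain: blackduck.example.com       # placeholder
          user: sysadmin                      # placeholder
          password: <redacted>                # placeholder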

      Actual results:

      The operator Pod goes into CrashLoopBackOff. It does manage to create some resources first, but I'm not sure whether it finished or still had more resources left to create.

      Expected results:

      The operator should stay within its memory limit, finish reconciling the BlackduckConnector resource, and keep running without being OOM-killed.

      Additional info:

          1. Container status:

      containerStatuses:
        - restartCount: 7
          started: false
          ready: false
          name: manager
          state:
            waiting:
              reason: CrashLoopBackOff
              message: >-
                back-off 5m0s restarting failed container=manager
                pod=blackduck-connector-operator-controller-manager-75bfd6f6db4nv9q_openshift-operators(67297eb1-2750-426f-9674-d4a20f10d58f)
          imageID: >-
            registry.connect.redhat.com/blackducksoftware/blackduck-connector-operator@sha256:688d3d8d6380a4332c1df78cc0de521b8769b6518e9725a6943521d666c45a79
          image: >-
            registry.connect.redhat.com/blackducksoftware/blackduck-connector-operator@sha256:688d3d8d6380a4332c1df78cc0de521b8769b6518e9725a6943521d666c45a79
          lastState:
            terminated:
              exitCode: 137
              reason: OOMKilled
              startedAt: '2021-10-05T21:12:11Z'
              finishedAt: '2021-10-05T21:12:43Z'
              containerID: >-
                cri-o://89018b4dd4bce426d554eafb7111edd513c7794c37db0273d8ec10c8c74c856a
          containerID: 'cri-o://89018b4dd4bce426d554eafb7111edd513c7794c37db0273d8ec10c8c74c856a'
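
      The status above can be retrieved from the pod with a standard command such as:

      $ oc get pod -n openshift-operators blackduck-connector-operator-controller-manager-75bfd6f6db4nv9q -o yaml
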
          2. Pod logs:

      $ oc logs -f -n openshift-operators blackduck-connector-operator-controller-manager-75bfd6f6db4nv9q

      {"level":"info","ts":1633467315.2288194,"logger":"cmd","msg":"Version","Go Version":"go1.15.5","GOOS":"linux","GOARCH":"amd64","helm-operator":"v1.3.0","commit":"1abf57985b43bf6a59dcd18147b3c574fa57d3f6"} {"level":"info","ts":1633467315.2293358,"logger":"cmd","msg":"WATCH_NAMESPACE environment variable not set. Watching all namespaces.","Namespace":""}

      I1005 20:55:16.384159 1 request.go:645] Throttling request took 1.045982142s, request: GET:https://172.30.0.1:443/apis/template.openshift.io/v1?timeout=32s

      {"level":"info","ts":1633467318.139686,"logger":"controller-runtime.metrics","msg":"metrics server is starting to listen","addr":"127.0.0.1:8080"} {"level":"info","ts":1633467318.140366,"logger":"helm.controller","msg":"Watching resource","apiVersion":"charts.synopsys.com/v1alpha1","kind":"BlackduckConnector","namespace":"","reconcilePeriod":"1m0s"}

      I1005 20:55:18.140722 1 leaderelection.go:243] attempting to acquire leader lease openshift-operators/blackduck-connector-operator...

      {"level":"info","ts":1633467318.1407757,"logger":"controller-runtime.manager","msg":"starting metrics server","path":"/metrics"}

      I1005 20:55:35.624799 1 leaderelection.go:253] successfully acquired lease openshift-operators/blackduck-connector-operator

      {"level":"info","ts":1633467335.6266904,"logger":"controller-runtime.manager.controller.blackduckconnector-controller","msg":"Starting EventSource","source":"kind source: charts.synopsys.com/v1alpha1, Kind=BlackduckConnector"} {"level":"info","ts":1633467335.7286408,"logger":"controller-runtime.manager.controller.blackduckconnector-controller","msg":"Starting Controller"} {"level":"info","ts":1633467335.7287776,"logger":"controller-runtime.manager.controller.blackduckconnector-controller","msg":"Starting workers","worker count":8}

      I1005 20:57:48.879034 1 request.go:645] Throttling request took 1.041674281s, request: GET:https://172.30.0.1:443/apis/scheduling.k8s.io/v1?timeout=32s

      {"level":"info","ts":1633467473.530251,"logger":"controller-runtime.manager.controller.blackduckconnector-controller","msg":"Starting EventSource","source":"kind source: /v1, Kind=Service"} {"level":"info","ts":1633467473.7306275,"logger":"helm.controller","msg":"Watching dependent resource","ownerApiVersion":"charts.synopsys.com/v1alpha1","ownerKind":"BlackduckConnector","apiVersion":"v1","kind":"Service"} {"level":"info","ts":1633467473.7319126,"logger":"controller-runtime.manager.controller.blackduckconnector-controller","msg":"Starting EventSource","source":"kind source: apps/v1, Kind=Deployment"} {"level":"info","ts":1633467474.4329884,"logger":"helm.controller","msg":"Watching dependent resource","ownerApiVersion":"charts.synopsys.com/v1alpha1","ownerKind":"BlackduckConnector","apiVersion":"apps/v1","kind":"Deployment"} {"level":"info","ts":1633467474.4340327,"logger":"controller-runtime.manager.controller.blackduckconnector-controller","msg":"Starting EventSource","source":"kind source: /v1, Kind=ConfigMap"}

      Immediately after this log statement, the process gets killed.

          3. Kernel logs:
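
      These kernel messages come from the node (agent0); for reference, they can be read with something like:

      $ oc debug node/agent0 -- chroot /host journalctl -k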

      Oct 05 21:18:17 agent0 kernel: helm-operator invoked oom-killer: gfp_mask=0x6000c0(GFP_KERNEL), order=0, oom_score_adj=999
      Oct 05 21:18:17 agent0 kernel: CPU: 4 PID: 1788189 Comm: helm-operator Not tainted 4.18.0-305.19.1.el8_4.x86_64 #1
      Oct 05 21:18:17 agent0 kernel: Hardware name: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module+el8.3.0+7353+9de0a3cc 04/01/2014
      Oct 05 21:18:17 agent0 kernel: Call Trace:
      Oct 05 21:18:17 agent0 kernel: dump_stack+0x5c/0x80
      Oct 05 21:18:17 agent0 kernel: dump_header+0x4a/0x1db
      Oct 05 21:18:17 agent0 kernel: oom_kill_process.cold.32+0xb/0x10
      Oct 05 21:18:17 agent0 kernel: out_of_memory+0x1ab/0x4a0
      Oct 05 21:18:17 agent0 kernel: mem_cgroup_out_of_memory+0xe8/0x100
      Oct 05 21:18:17 agent0 kernel: try_charge+0x65a/0x690
      Oct 05 21:18:17 agent0 kernel: mem_cgroup_charge+0xca/0x220
      Oct 05 21:18:17 agent0 kernel: do_anonymous_page+0x101/0x380
      Oct 05 21:18:17 agent0 kernel: __handle_mm_fault+0x983/0xca0
      Oct 05 21:18:17 agent0 kernel: ? recalc_sigpending+0x17/0x50
      Oct 05 21:18:17 agent0 kernel: handle_mm_fault+0xc2/0x1d0
      Oct 05 21:18:17 agent0 kernel: __do_page_fault+0x1ed/0x4c0
      Oct 05 21:18:17 agent0 kernel: do_page_fault+0x37/0x130
      Oct 05 21:18:17 agent0 kernel: ? page_fault+0x8/0x30
      Oct 05 21:18:17 agent0 kernel: page_fault+0x1e/0x30
      Oct 05 21:18:17 agent0 kernel: RIP: 0033:0x46ccef
      Oct 05 21:18:17 agent0 kernel: Code: 00 00 c5 fe 6f 06 c5 fe 6f 4e 20 c5 fe 6f 56 40 c5 fe 6f 5e 60 48 81 c6 80 00 00 00 c5 fd e7 07 c5 fd e7 4f 20 c5 fd e7 57 40 <c5> fd e7 5f 60 48 81 c7 80 00 00 00 48 81 eb 80 00 00 00 77 b5 0f
      Oct 05 21:18:17 agent0 kernel: RSP: 002b:000000c001cc0f30 EFLAGS: 00010202
      Oct 05 21:18:17 agent0 kernel: RAX: 000000c005c80000 RBX: 00000000009efde0 RCX: 000000c006c7fe00
      Oct 05 21:18:17 agent0 kernel: RDX: 0000000001fffe00 RSI: 000000c005290020 RDI: 000000c00628ffa0
      Oct 05 21:18:17 agent0 kernel: RBP: 000000c001cc0f78 R08: 000000c005c80000 R09: ffffffffffffffff
      Oct 05 21:18:17 agent0 kernel: R10: 0000000000000020 R11: 0000000000000202 R12: 0000000000000002
      Oct 05 21:18:17 agent0 kernel: R13: 0000000002a86940 R14: 0000000000000000 R15: 000000000046ba00
      Oct 05 21:18:17 agent0 kernel: memory: usage 92160kB, limit 92160kB, failcnt 2628
      Oct 05 21:18:17 agent0 kernel: memory+swap: usage 92160kB, limit 9007199254740988kB, failcnt 0
      Oct 05 21:18:17 agent0 kernel: kmem: usage 1648kB, limit 9007199254740988kB, failcnt 0
      Oct 05 21:18:17 agent0 kernel: Memory cgroup stats for /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67297eb1_2750_426f_9674_d4a20f10d58f.slice:
      Oct 05 21:18:17 agent0 kernel: anon 92323840
      file 0
      kernel_stack 516096
      slab 328808
      percpu 0
      sock 0
      shmem 0
      file_mapped 0
      file_dirty 0
      file_writeback 0
      anon_thp 0
      inactive_anon 92336128
      active_anon 135168
      inactive_file 0
      active_file 0
      unevictable 0
      slab_reclaimable 258032
      slab_unreclaimable 70776
      pgfault 102884
      pgmajfault 0
      workingset_refault_anon 0
      workingset_refault_file 0
      workingset_activate_anon 0
      workingset_activate_file 0
      workingset_restore_anon 0
      workingset_restore_file 0
      workingset_nodereclaim 0
      pgrefill 1389
      pgscan 85042
      pgsteal 64289
      pgactivate 11065
      pgdeactivate 1389
      pglazyfree 73798
      pglazyfreed 64250
      thp_fault_alloc 414
      thp_collapse_alloc 0
      Oct 05 21:18:17 agent0 kernel: Tasks state (memory values in pages):
      Oct 05 21:18:17 agent0 kernel: [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name
      Oct 05 21:18:17 agent0 kernel: [1650942] 0 1650942 35955 567 167936 0 -1000 conmon
      Oct 05 21:18:17 agent0 kernel: [1650963] 0 1650963 245 1 32768 0 -998 pod
      Oct 05 21:18:17 agent0 kernel: [1788122] 0 1788122 35955 609 167936 0 -1000 conmon
      Oct 05 21:18:17 agent0 kernel: [1788138] 1000390000 1788138 203811 29672 368640 0 999 helm-operator
      Oct 05 21:18:17 agent0 kernel: oom-kill:constraint=CONSTRAINT_MEMCG,nodemask=(null),cpuset=crio-f075bb3452a0a9a8db4a8120d18a3426a36438bd86f09d95fe1a4aa7f8d27021.scope,mems_allowed=0,oom_memcg=/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67297eb1_2750_426f_9674_d4a20f10d58f.slice,task_memcg=/kubepod>
      Oct 05 21:18:17 agent0 kernel: Memory cgroup out of memory: Killed process 1788138 (helm-operator) total-vm:815244kB, anon-rss:88828kB, file-rss:29860kB, shmem-rss:0kB, UID:1000390000 pgtables:360kB oom_score_adj:999
      Oct 05 21:18:17 agent0 kernel: oom_reaper: reaped process 1788138 (helm-operator), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB

      Note the cgroup memory limit above: usage 92160kB, limit 92160kB, i.e. 90 MiB. The helm-operator process hits that limit while reconciling the BlackduckConnector instance, so the manager container's default memory limit appears to be too small for this workload.
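
      A possible workaround (not verified here) is to raise the manager container's memory limit through the operator's Subscription, since OLM applies a Subscription's spec.config.resources to the operator Deployment. A minimal sketch, assuming the Subscription is named blackduck-connector-operator in openshift-operators and leaving the rest of its spec unchanged; the request/limit values are illustrative only:

      spec:
        config:
          resources:
            requests:
              memory: 256Mi   # illustrative value
            limits:
              memory: 512Mi   # illustrative value; the default limit observed above is 90 MiB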

              Assignee: Tony Campbell (tocampbe@redhat.com)
              Reporter: Michael Hrivnak (mhrivnak@redhat.com)